Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Google+. Show all posts

Herodotus Trojan Mimics Human Typing to Steal Banking Credentials

 



A newly discovered Android malware, Herodotus, is alarming cybersecurity experts due to its unique ability to imitate human typing. This advanced technique allows the malware to avoid fraud detection systems and secretly steal sensitive financial information from unsuspecting users.

According to researchers from Dutch cybersecurity firm ThreatFabric, Herodotus combines elements from older malware families like Brokewell with newly written code, creating a hybrid trojan that is both deceptive and technically refined. The malware’s capabilities include logging keystrokes, recording screen activity, capturing biometric data, and hijacking user inputs in real time.


How users get infected

Herodotus spreads mainly through side-loading, a process where users install applications from outside the official Google Play Store. Attackers are believed to use SMS phishing (smishing) campaigns that send malicious links disguised as legitimate messages. Clicking on these links downloads a small installer, also known as a dropper, that delivers the actual malware to the device.

Once installed, the malware prompts victims to enable Android Accessibility Services, claiming it is required for app functionality. However, this permission gives the attacker total control,  allowing them to read content on the screen, click buttons, swipe, and interact with any open application as if they were the device owner.


The attack mechanism

After the infection, Herodotus collects a list of all installed apps and sends it to its command-and-control (C2) server. Based on this data, the operator pushes overlay pages, fake screens designed to look identical to genuine banking or cryptocurrency apps. When users open their actual financial apps, these overlays appear on top, tricking victims into entering login details, card numbers, and PINs.

The malware can also intercept one-time passwords (OTPs) sent via SMS, record keystrokes, and even stream live footage of the victim’s screen. With these capabilities, attackers can execute full-scale device takeover attacks, giving them unrestricted access to the user’s financial accounts.


The human-like typing trick

What sets Herodotus apart is its behavioral deception technique. To appear human during remote-control sessions, the malware adds random time delays between keystrokes, ranging from 0.3 to 3 seconds. This mimics natural human typing speed instead of the instant input patterns of automated tools.

Fraud detection systems that rely solely on input timing often fail to recognize these attacks because the malware’s simulated typing appears authentic. Analysts warn that as Herodotus continues to evolve, it may become even harder for traditional detection tools to identify.


Active regions and underground sale

ThreatFabric reports that the malware has already been used in Italy and Brazil, disguising itself as apps named “Banca Sicura” and “Modulo Seguranca Stone.” Researchers also found fake login pages imitating popular banking and cryptocurrency platforms in the United States, United Kingdom, Turkey, and Poland.

The malware’s developer, who goes by the alias “K1R0” on underground forums, began offering Herodotus as a Malware-as-a-Service (MaaS) product in September. This means other cybercriminals can rent or purchase it for use in their own campaigns, further increasing the likelihood of global spread.

Google confirmed that Play Protect already blocks known versions of Herodotus. Users can stay protected by avoiding unofficial downloads, ignoring links in unexpected text messages, and keeping Play Protect active. It is also crucial to avoid granting Accessibility permissions unless an app’s legitimacy is verified.

Security professionals advise enabling stronger authentication methods, such as app-based verification instead of SMS-based codes, and keeping both system and app software regularly updated.


Google Probes Weeks-Long Security Breach Linked to Contractor Access

 




Google has launched a detailed investigation into a weeks-long security breach after discovering that a contractor with legitimate system privileges had been quietly collecting internal screenshots and confidential files tied to the Play Store ecosystem. The company uncovered the activity only after it had continued for several weeks, giving the individual enough time to gather sensitive technical data before being detected.

According to verified cybersecurity reports, the contractor managed to access information that explained the internal functioning of the Play Store, Google’s global marketplace serving billions of Android users. The files reportedly included documentation describing the structure of Play Store infrastructure, the technical guardrails that screen malicious apps, and the compliance systems designed to meet international data protection laws. The exposure of such material presents serious risks, as it could help malicious actors identify weaknesses in Google’s defense systems or replicate its internal processes to deceive automated security checks.

Upon discovery of the breach, Google initiated a forensic review to determine how much information was accessed and whether it was shared externally. The company has also reported the matter to law enforcement and begun a complete reassessment of its third-party access procedures. Internal sources indicate that Google is now tightening security for all contractor accounts by expanding multi-factor authentication requirements, deploying AI-based systems to detect suspicious activities such as repeated screenshot captures, and enforcing stricter segregation of roles and privileges. Additional measures include enhanced background checks for third-party employees who handle sensitive systems, as part of a larger overhaul of Google’s contractor risk management framework.

Experts note that the incident arrives during a period of heightened regulatory attention on Google’s data protection and antitrust practices. The breach not only exposes potential security weaknesses but also raises broader concerns about insider threats, one of the most persistent and challenging issues in cybersecurity. Even companies that invest heavily in digital defenses remain vulnerable when authorized users intentionally misuse their access for personal gain or external collaboration.

The incident has also revived discussion about earlier insider threat cases at Google. In one of the most significant examples, a former software engineer was charged with stealing confidential files related to Google’s artificial intelligence systems between 2022 and 2023. Investigators revealed that he had transferred hundreds of internal documents to personal cloud accounts and even worked with external companies while still employed at Google. That case, which resulted in multiple charges of trade secret theft and economic espionage, underlined how intellectual property theft by insiders can evolve into major national security concerns.

For Google, the latest breach serves as another reminder that internal misuse, whether by employees or contractors remains a critical weak point. As the investigation continues, the company is expected to strengthen oversight across its global operations. Cybersecurity analysts emphasize that organizations managing large user platforms must combine strong technical barriers with vigilant monitoring of human behavior to prevent insider-led compromises before they escalate into large-scale risks.



Gmail Credentials Appear in Massive 183 Million Infostealer Data Leak, but Google Confirms No New Breach




A vast cache of 183 million email addresses and passwords has surfaced in the Have I Been Pwned (HIBP) database, raising concern among Gmail users and prompting Google to issue an official clarification. The newly indexed dataset stems from infostealer malware logs and credential-stuffing lists collected over time, rather than a fresh attack targeting Gmail or any other single provider.


The Origin of the Dataset

The large collection, analyzed by HIBP founder Troy Hunt, contains records captured by infostealer malware that had been active for nearly a year. The data, supplied by Synthient, amounted to roughly 3.5 terabytes, comprising nearly 23 billion rows of stolen information. Each entry typically includes a website name, an email address, and its corresponding password, exposing a wide range of online accounts across various platforms.

Synthient’s Benjamin Brundage explained that this compilation was drawn from continuous monitoring of underground marketplaces and malware operations. The dataset, referred to as the “Synthient threat data,” was later forwarded to HIBP for indexing and public awareness.


How Much of the Data Is New

Upon analysis, Hunt discovered that most of the credentials had appeared in previous breaches. Out of a 94,000-record sample, about 92 percent matched older data, while approximately 8 percent represented new and unseen credentials. This translates to over 16 million previously unrecorded email addresses, fresh data that had not been part of any known breaches or stealer logs before.

To test authenticity, Hunt contacted several users whose credentials appeared in the sample. One respondent verified that the password listed alongside their Gmail address was indeed correct, confirming that the dataset contained legitimate credentials rather than fabricated or corrupted data.


Gmail Accounts Included, but No Evidence of a Gmail Hack

The inclusion of Gmail addresses led some reports to suggest that Gmail itself had been breached. However, Google has publicly refuted these claims, stating that no new compromise has taken place. According to Google, the reports stem from a misunderstanding of how infostealer databases operate, they simply aggregate previously stolen credentials from different malware incidents, not from a new intrusion into Gmail systems.

Google emphasized that Gmail’s security systems remain robust and that users are protected through ongoing monitoring and proactive account protection measures. The company said it routinely detects large credential dumps and initiates password resets to protect affected accounts.

In a statement, Google advised users to adopt stronger account protection measures: “Reports of a Gmail breach are false. Infostealer databases gather credentials from across the web, not from a targeted Gmail attack. Users can enhance their safety by enabling two-step verification and adopting passkeys as a secure alternative to passwords.”


What Users Should Do

Experts recommend that individuals check their accounts on Have I Been Pwned to determine whether their credentials appear in this dataset. Users are also advised to enable multi-factor authentication, switch to passkeys, and avoid reusing passwords across multiple accounts.

Gmail users can utilize Google’s built-in Password Manager to identify weak or compromised passwords. The password checkup feature, accessible from Chrome’s settings, can alert users about reused or exposed credentials and prompt immediate password changes.

If an account cannot be accessed, users should proceed to Google’s account recovery page and follow the verification steps provided. Google also reminded users that it automatically requests password resets when it detects exposure in large credential leaks.


The Broader Security Implications

Cybersecurity professionals stress that while this incident does not involve a new system breach, it reinforces the ongoing threat posed by infostealer malware and poor password hygiene. Sachin Jade, Chief Product Officer at Cyware, highlighted that credential monitoring has become a vital part of any mature cybersecurity strategy. He explained that although this dataset results from older breaches, “credential-based attacks remain one of the leading causes of data compromise.”

Jade further noted that organizations should integrate credential monitoring into their broader risk management frameworks. This helps security teams prioritize response strategies, enforce adaptive authentication, and limit lateral movement by attackers using stolen passwords.

Ultimately, this collection of 183 million credentials serves as a reminder that password leaks, whether new or recycled, continue to feed cybercriminal activity. Continuous vigilance, proactive password management, and layered security practices remain the strongest defenses against such risks.


Is ChatGPT's Atlas Browser the Future of Internet?

Is ChatGPT's Atlas Browser the Future of Internet?

After using ChatGPT Atlas, OpenAI's new web browser, users may notice few issues. This is not the same as Google Chrome, which about 60% of users use. It is based on a chatbot that you are supposed to converse with in order to browse the internet.  

One of the notes said, "Messages limit reached," "No models that are currently available support the tools in use," another stated.  

Following that: "You've hit the free plan limit for GPT-5."  

Paid browser 

According to OpenAI, it will simplify and improve internet usage. One more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the corporation must start generating significantly more revenue from its 800 million customers.

According to OpenAI, Atlas allows us to "rethink what it means to use the web". It appears to be comparable to Chrome or Apple's Safari at first glance, with one major exception: a sidebar chatbot. These are early days, but there is the potential for significant changes in how we use the Internet. What is certain is that this will be a high-end gadget that will only function properly if you pay a monthly subscription price. Given how accustomed we are to free internet access, many people would have to drastically change their routines.

Competitors, data, and money

The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So, how does a browser assist with this mission? It actually doesn't. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to spend billions of dollars in it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue. However, obtaining funds through typical internet advertising may be risky. Atlas might also grant the corporation access to a large amount of user data.

The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.  

Will it kill Google?

Then we get to compete. Google Chrome is so prevalent that authorities throughout the world are raising their eyebrows and using terms like "monopoly" to describe it. It will not be easy to break into that market.

Google's Gemini AI is now integrated into the search engine, and Microsoft has included Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.

Google’s Quantum Breakthrough Rekindles Concerns About Bitcoin’s Long-Term Security

 




Google has announced a verified milestone in quantum computing that has once again drawn attention to the potential threat quantum technology could pose to Bitcoin and other digital systems in the future.

The company’s latest quantum processor, Willow, has demonstrated a confirmed computational speed-up over the world’s leading supercomputers. Published in the journal Nature, the findings mark the first verified example of a quantum processor outperforming classical machines in a real experiment.

This success brings researchers closer to the long-envisioned goal of building reliable quantum computers and signals progress toward machines that could one day challenge the cryptography protecting cryptocurrencies.


What Google Achieved

According to Google’s study, the 105-qubit Willow chip ran a physics algorithm faster than any known classical system could simulate. This achievement, often referred to as “quantum advantage,” shows that quantum processors are starting to perform calculations that are practically impossible for traditional computers.

The experiment used a method called Quantum Echoes, where researchers advanced a quantum system through several operations, intentionally disturbed one qubit, and then reversed the sequence to see if the information would reappear. The re-emergence of this information, known as a quantum echo, confirmed the system’s interference patterns and genuine quantum behavior.

In measurable terms, Willow completed the task in just over two hours, while Frontier, one of the world’s fastest publicly benchmarked supercomputers, would need about 3.2 years to perform the same operation. That represents a performance difference of nearly 13,000 times.

The results were independently verified and can be reproduced by other quantum systems, a major step forward from previous experiments that lacked reproducibility. Google CEO Sundar Pichai noted on X that this outcome is “a substantial step toward the first real-world application of quantum computing.”

Willow’s superconducting transmon qubits achieved an impressive level of stability. The chip recorded median two-qubit gate errors of 0.0015 and maintained coherence times above 100 microseconds, allowing scientists to execute 23 layers of quantum operations across 65 qubits. This pushed the system beyond what classical models can reproduce and proved that complex, multi-layered quantum circuits can now be managed with high accuracy.


From Sycamore to Willow

The Willow processor, unveiled in December 2024, is a successor to Google’s Sycamore chip from 2019, which first claimed quantum supremacy but lacked experimental consistency. Willow bridges that gap by introducing stronger error correction and better coherence, enabling experiments that can be repeated and verified within the same hardware.

While the processor is still in a research phase, its stability and reproducibility represent significant engineering progress. The experiment also confirmed that quantum interference can persist in systems too complex for classical simulation, which strengthens the case for practical quantum applications.


Toward Real-World Uses

Google now plans to move beyond proof-of-concept demonstrations toward practical quantum simulations, such as modeling atomic and molecular interactions. These tasks are vital for fields like drug discovery, battery design, and material science, where classical computers struggle to handle the enormous number of variables involved.

In collaboration with the University of California, Berkeley, Google recently demonstrated a small-scale quantum experiment to model molecular systems, marking an early step toward what the company calls a “quantum-scope” — a tool capable of observing natural phenomena that cannot be measured using classical instruments.


The Bitcoin Question

Although Willow’s success does not pose an immediate threat to Bitcoin, it has revived discussions about how close quantum computers are to breaking elliptic-curve cryptography (ECC), which underpins most digital financial systems. ECC is nearly impossible for classical computers to reverse-engineer, but it could theoretically be broken by a powerful quantum system running algorithms such as Shor’s algorithm.

Experts caution that this risk remains distant but credible. Christopher Peikert, a professor of computer science and engineering at the University of Michigan, told Decrypt that quantum computing has a small but significant chance, over five percent, of becoming a major long-term threat to cryptocurrencies.

He added that moving to post-quantum cryptography would address these vulnerabilities, but the trade-offs include larger keys and signatures, which would increase network traffic and block sizes.


Why It Matters

Simulating Willow’s circuits using tensor-network algorithms would take more than 10 million CPU-hours on Frontier. The contrast between two hours of quantum computation and several years of classical simulation offers clear evidence that practical quantum advantage is becoming real.

The Willow experiment transitions quantum research from theory to testable engineering. It shows that real hardware can perform verified calculations that classical computers cannot feasibly replicate.

For cybersecurity professionals and blockchain developers, this serves as a reminder that quantum resistance must now be part of long-term security planning. The countdown toward a quantum future has already begun, and with each verified advance, that future moves closer to reality.



Hackers Exploit Blockchain Networks to Hide and Deliver Malware, Google Warns

 



Google’s Threat Intelligence Group has uncovered a new wave of cyberattacks where hackers are using public blockchains to host and distribute malicious code. This alarming trend transforms one of the world’s most secure and tamper-resistant technologies into a stealthy channel for cybercrime.

According to Google’s latest report, several advanced threat actors, including one group suspected of operating on behalf of North Korea have begun embedding harmful code into smart contracts on major blockchain platforms such as Ethereum and the BNB Smart Chain. The technique, known as “EtherHiding,” allows attackers to conceal malware within the blockchain itself, creating a nearly untraceable and permanent delivery system.

Smart contracts were originally designed to enable transparent and trustworthy transactions without intermediaries. However, attackers are now exploiting their immutability to host malware that cannot be deleted or blocked. Once malicious code is written into a blockchain contract, it becomes permanently accessible to anyone who knows how to retrieve it.

This innovation replaces the need for traditional “bulletproof hosting” services, offshore servers that cybercriminals once used to evade law enforcement. By using blockchain networks instead, hackers can distribute malicious software at a fraction of the cost, often paying less than two dollars per contract update.

The decentralized nature of these systems eliminates any single point of failure, meaning there is no authority capable of taking down the malicious data. Even blockchain’s anonymity features benefit attackers, as retrieving code from smart contracts leaves no identifiable trace in transaction logs.


How the Attacks Unfold

Google researchers observed that hackers often begin their campaigns with social engineering tactics targeting software developers. Pretending to be recruiters, they send job offers that require the victims to complete “technical tasks.” The provided test files secretly install the initial stage of malware.

Once the system is compromised, additional malicious components are fetched directly from smart contracts stored on Ethereum or BNB Smart Chain. This multi-layered strategy enables attackers to modify or update their payloads anytime without being detected by conventional cybersecurity tools.

Among the identified actors, UNC5342, a North Korea-linked hacking collective, uses a downloader called JadeSnow to pull secondary payloads hidden within blockchain contracts. In several incidents, the group switched between Ethereum and BNB Smart Chain mid-operation; a move possibly motivated by lower transaction fees or operational segmentation. Another financially driven group, UNC5142, has reportedly adopted the same approach, signaling a broader trend among sophisticated threat actors.


The findings stress upon how cybercriminals are reimagining blockchain’s purpose. A tool built for transparency and trust is now being reshaped into an indestructible infrastructure for malware delivery.

Analysts also note that North Korea’s cyber operations have become more advanced in recent years. Blockchain research firm Elliptic estimated earlier this month that North Korean-linked hackers have collectively stolen over $2 billion in digital assets since early 2025.

Security experts warn that as blockchain adoption expands, defenders must develop new strategies to monitor and counter such decentralized threats. Traditional takedown mechanisms will no longer suffice when malicious data resides within a public, unchangeable ledger.



Incognito Mode Is Not Private, Use These Instead


Incognito (private mode) is a famous privacy feature in web browsers. Users may think that using Incognito mode ensures privacy while surfing the web, allowing them to browse without restrictions, and that everything disappears when the tab is closed. 

With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless. 

What Incognito mode does

Private browsing mode is made to keep your local browsing history secret. When a user opens an incognito window, their browser starts a different session and temporarily saves browsing in the session, such as history and cookies. Once the private session is closed, the temporary information is self-deleted and is not visible in your browsing history. 

What Incognito mode can’t do

Incognito mode helps to keep your browsing data safe from other users who use your device

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels via the ISP network (encrypted DNS providers are an exception). Your ISPs can track user activity on their networks, and can monitor your activity and all the domains you visit, and even your unencrypted traffic. If you are on a corporate Wi-Fi network, your network admin can see the visited websites. 

2. Incognito mode doesn’t stop websites from tracking users

When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites create user profiles based on unique device characteristics such as resolution, installed extensions, and screen size.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won’t make it accessible. It can’t change your I address.

Should you use Incognito mode?

It may give a false sense of benefits, but Incognito mode doesn’t ensure privacy. It is only helpful for shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: Browsers such as Tor are by default designed to block trackers, ads, and fingerprinting.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

Social Event App Partiful Did Not Collect GPS Locations from Photos

 

Social event planning app Partiful, also known as "Facebook events for hot people," has replaced Facebook as the go-to place for sending party invites. However, like Facebook, Partiful also collects user data. 

The hosts can create online invitations in a retro style, which allows users to RSVP to events easily. The platform strives to be user-friendly and trendy, which has made the app No.9 on the Apple store, and Google has called it "the best app" of 2024. 

About Partiful

Partiful has recently developed into a Facebook-like social graph; it maps your friends and also friends of friends, what you do, where you go, and your contact numbers. When the app became famous, people started doubting its origins, alleging that the app had former employees of a data-mining company. TechCrunch, however, found that the app was not storing any location data from user-uploaded images, which include public profile pictures. 

Metadata in photos

The photos that you have on your phones have metadata, which consists of file size, date of capture. With videos, Metadata can include information such as the type of camera used, the settings, and latitude/longitude coordinates. TechCrunch discovered that anyone could use the developer tools in a web browser to get raw user profile photos access from Partiful’s back-end database on Google Firebase. 

About the bug

The flaw could have been problematic, as it could have exposed the location of a person’s profile photo if someone used Partiful. 

According to TechCrunch, “Some Partiful user profile photos contained highly granular location data that could be used to identify the person’s home or work, particularly in rural areas where individual homes are easier to distinguish on a map.”

It is a common norm for companies hosting user photos and videos to automatically remove metadata once uploaded to prevent privacy issues, such as Partiful.

Gemini in Chrome: Google Can Now Track Your Phone

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

A new warning for 2 billion Chrome users, Google has announced that its browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, which will be the “biggest upgrade to Chrome in its history.” The data that can be collected includes the device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the user privacy of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, this is bigger than any other agentic AI browsers that have been analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload has a biometric fingerprint which consists of your micro-expressions, unique facial geometry, body proportions, and micro-expressions. The biometric data included device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now has anti-fingerprinting technology as the default browsing for iOS 26. However, users should only use their own browser for it to work. For instance, if you use Chrome on an Apple device, it won’t work. Another reason why Apply is advising users to use the Safari browser and not Chrome. 

Google Messages Adds QR Code Verification to Prevent Impersonation Scams

 

Google is preparing to roll out a new security feature in its Messages app that adds another layer of protection against impersonation scams. The update, now available in beta, introduces a QR code system to verify whether the person you are chatting with is using a legitimate device. The move is part of Google’s broader effort to strengthen end-to-end encryption and make it easier for users to confirm the authenticity of their contacts.  

Previously, Google Messages allowed users to verify encryption by exchanging and manually comparing an 80-digit code. While effective, the process was cumbersome and rarely used by everyday users. The new QR code option simplifies this verification method by allowing contacts to scan each other’s codes directly. Once scanned, Google can confirm the identity of the devices involved in the conversation and alert users if suspicious or unauthorized activity is detected. This makes it harder for attackers to impersonate contacts or intercept conversations unnoticed. 

According to reports, the feature will be available on devices running Android 9 and higher later this year. For those enrolled in the beta program, it can already be found within the Google Messages app. Users can access it by opening a conversation, tapping on the contact’s name, and navigating to the “End-to-end encryption” section under the details menu. Within that menu, the “Verify encryption” option now provides two methods: manually comparing the 80-digit code or scanning a QR code. 

To complete the process, both participants must scan each other’s codes, after which the devices are marked as verified. Though integration with the “Connected apps” section in the Contacts app has been hinted at, this functionality has not yet gone live. The addition of QR-based verification comes as part of a larger wave of updates designed to modernize and secure Google Messages. Recently, Google introduced a “Delete for everyone” option, giving users more control over sent messages. 

The company also launched a sensitive content warning system and an unsubscribe button to block unwanted spam, following its announcement in October of last year about bolstering protections against abusive messaging practices. With growing concerns about phishing, identity theft, and messaging fraud, the QR code feature provides a more user-friendly safeguard. By reducing friction in the verification process, Google increases the likelihood that more people will adopt it as part of their everyday communication. 

While there is no official release date, the company is expected to roll out this security enhancement before the end of the year, continuing its push to position Google Messages as a secure and competitive alternative in the messaging app market.

Google to Confirm Identity of Every Android App Developer

 







Google announced a new step to make Android apps safer: starting next year, developers who distribute apps to certified Android phones and tablets, even outside Google Play, will need to verify their legal identity. The change ties every app on certified devices to a named developer account, while keeping Android’s ability to run apps from other stores or direct downloads intact. 

What this means for everyday users and small developers is straightforward. If you download an app from a website or a third-party store, the app will now be linked to a developer who has provided a legal name, address, email and phone number. Google says hobbyists and students will have a lighter account option, but many independent creators may choose to register as a business to protect personal privacy. Certified devices are the ones that ship with Google services and pass Google’s compatibility tests; devices that do not include Google Play services may follow different rules. 

Google’s stated reason is security. The company reported that apps installed from the open internet are far more likely to contain malware than apps on the Play Store, and it says those risks come mainly from people hiding behind anonymous developer identities. By requiring identity verification, Google intends to make it harder for repeat offenders to publish harmful apps and to make malicious actors easier to track. 

The rollout is phased so developers and device makers can prepare. Early access invitations begin in October 2025, verification opens to all developers in March 2026, and the rules take effect for certified devices in Brazil, Indonesia, Singapore and Thailand in September 2026. Google plans a wider global rollout in 2027. If you are a developer, review Google’s new developer pages and plan to verify your account well before your target markets enforce the rule. 

A similar compliance pattern already exists in some places. For example, Apple requires developers who distribute apps in the European Union to provide a “trader status” and contact details to meet the EU Digital Services Act. These kinds of rules aim to increase accountability, but they also raise questions about privacy, the costs for small creators, and how “open” mobile platforms should remain. Both companies are moving toward tighter oversight of app distribution, with the goal of making digital marketplaces safer and more accountable.

This change marks one of the most significant shifts in Android’s open ecosystem. While users will still have the freedom to install apps from multiple sources, developers will now be held accountable for the software they release. For users, it could mean greater protection against scams and malicious apps. For developers, especially smaller ones, it signals a new balance between maintaining privacy and ensuring trust in the Android platform.


How ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attacks leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker’s best friend in data thefts. Known as AgentFlayer, the exploit uses a single document to hide “secret” prompt instructions that target OpenAI’s chatbot. An attacker can share what appears to be a harmless document with victims through Google Drive, without any clicks.

Zero-click threat: AgentFlayer

AgentFlayer is a “zero-click” threat as it abuses a vulnerability in Connectors, for instance, a ChatGPT feature that connects the assistant to other applications, websites, and services. OpenAI suggests that Connectors supports a few of the world’s most widely used platforms. This includes cloud storage platforms such as Microsoft OneDrive and Google Drive.

Experts used Google Drive to expose the threats possible from chatbots and hidden prompts. 

GoogleDoc used for injecting prompt

The malicious document has a 300-word hidden malicious prompt. The text is size one, formatted in white to hide it from human readers but visible to the chatbot.

The prompt used to showcase AgentFlayer’s attacks prompts ChatGPT to find the victim’s Google Drive for API keys, link them to a tailored URL, and an external server. When the malicious document is shared, the attack is launched. The threat actor gets the hidden API keys when the target uses ChatGPT (the Connectors feature has to be enabled).

Othe cloud platforms at risk too

AgentFlayer is not a bug that only affects the Google Cloud. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said expert Tamir Ishay Sharbat from Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenith Labs said in the report.

Tech Giant Google Introduces an Open-Source AI Agent to Automate Coding Activities

 

Google has launched Gemini CLI GitHub Actions, an open-source AI agent that automates routine coding tasks directly within GitHub repositories. This tool, now in beta and available globally, acts as an AI coding teammate that works both autonomously and on-demand to handle repetitive development workflows.

Key features

The Gemini CLI GitHub Actions is triggered by repository events such as new issues or pull requests, working asynchronously to triage problems, review code, and assist developers. Developers can directly interact with the agent by tagging @gemini-cli in issues or pull requests and assigning specific tasks like writing tests, implementing changes, or fixing bugs. The tool ships with three default intelligent workflows:

  • Issue triage and auto-labeling 
  • Accelerated pull request reviews
  • On-demand collaboration for targeted task delegation 

Built on Google's earlier Gemini CLI tool, the GitHub Actions version extends AI assistance from individual terminals to collaborative team environments. The agent provides powerful AI capabilities including code understanding, file manipulation, command execution, and dynamic troubleshooting. 

For individual developers, Google offers generous free usage limits of 60 model requests per minute and 1,000 requests per day at no charge when using a personal Google account. The tool integrates with Google's Gemini Code Assist, giving developers access to Gemini 2.5 Pro and its massive 1 million token context window.

The platform prioritizes security with credential-less authentication through Google Cloud's Workload Identity Federation, eliminating the need for long-lived API keys. Additional security measures include granular control with command allowlisting and complete transparency through OpenTelemetry integration for real-time monitoring.

Market positioning 

This launch represents Google's broader push into open-source AI development tools, positioning the agent as a direct competitor to GitHub Copilot and other AI coding assistants. Unlike traditional coding assistants that primarily suggest code, Gemini CLI GitHub Actions actively automates core developer workflows and can push commits autonomously.

The tool is part of Google's wider ecosystem of AI agents, following the company's release of other tools like the Agent2Agent Protocol for inter-agent communication and the "Big Sleep" security vulnerability detection system. 

The developer community has also responded enthusiastically to these releases, with thousands of developers already utilizing the tools during beta phases. During Jules' testing period alone, users completed tens of thousands of tasks and contributed over 140,000 public code improvements, demonstrating the practical value and adoption potential of these AI-powered development tools.

Google Confirms Data Breach in Salesforce System Linked to Known Hacking Group

 



Google has admitted that some of its customer data was stolen after hackers managed to break into one of its Salesforce databases.

The company revealed the incident in a blog post on Tuesday, explaining that the affected database stored contact details and notes about small and medium-sized business clients. The hackers, a group known online as ShinyHunters and officially tracked as UNC6040, were able to access the system briefly before Google’s security team shut them out.

Google stressed that the stolen information was limited to “basic and mostly public” details, such as business names, phone numbers, and email addresses. It did not share how many customers were affected, and a company spokesperson declined to answer further questions, including whether any ransom demand had been made.

ShinyHunters is notorious for breaking into large organizations’ cloud systems. In this case, Google says the group used voice phishing, calling employees and tricking them into granting system access — to target its Salesforce environment. Similar breaches have recently hit other companies using Salesforce, including Cisco, Qantas, and Pandora.

While Google believes the breach’s immediate impact will be minimal, cybersecurity experts warn there may be longer-term risks. Ben McCarthy, a lead security engineer at Immersive, pointed out that even simple personal details, once in criminal hands, can be exploited for scams and phishing attacks. Unlike passwords, names, dates of birth, and email addresses cannot be changed.

Google says it detected and stopped the intrusion before all data could be removed. In fact, the hackers only managed to take a small portion of the targeted database. Earlier this year, without naming itself as the victim, Google had warned of a similar case where a threat actor retrieved only about 10% of data before being cut off.

Reports suggest the attackers may now be preparing to publish the stolen information on a data leak site, a tactic often used to pressure companies into paying ransoms. ShinyHunters has been linked to other criminal networks, including The Com, a group known for hacking, extortion, and sometimes even violent threats.

Adding to the uncertainty, the hackers themselves have hinted they might leak the data outright instead of trying to negotiate with Google. If that happens, affected business contacts could face targeted phishing campaigns or other cyber threats.

For now, Google maintains that its investigation is ongoing and says it is working to ensure no further data is at risk. Customers are advised to stay alert for suspicious calls, emails, or messages claiming to be from Google or related business partners.

Market Trends Reveal Urgent Emerging Cybersecurity Requirements

 


During an era of unprecedented digital acceleration and hyperconnectivity, cybersecurity is no longer the sole responsibility of IT departments — it has now become a crucial strategic pillar for businesses of all sizes in an age of hyperconnectivity. 

Recent market trends are signalling an urgent need for a recalibration of cybersecurity priorities, as sophisticated cyber threats are on the rise, regulations are being tightened, and cloud-native technologies are on the rise. Increasingly, businesses and governments are realising that security is no longer merely a technical protection, but rather a foundational component of trust, resilience, and long-term growth. 

There is a growing need in the cybersecurity market for proactive, adaptable, and intelligence-driven defences as a result of the evolving threat landscape and expanding attack surfaces. There is no doubt that the market is speaking louder through investment shifts, vendor realignment, and customer demand, which is why modern cybersecurity must move in lockstep with innovation — otherwise it may turn out to be a costly vulnerability. 

Increasingly, organisations are finding that they are having trouble coping with the speed at which technological innovation and business transformation are taking place. Throughout the year 2025, chief information security officers (CISOs) will be faced with a critical situation where they must defend their organisations not only from evolving threats but also demonstrate that their security programs can be of tangible business value. 

Based on emerging insights, cybersecurity leaders are increasingly focused on ensuring that resilience is embedded at all levels — organisational, team-based, and individual — as a means of maintaining performance and operational continuity when adversity occurs. Based on recent industry trends, nine core capabilities seem to be the most important ones to address this mandate, ranging from how organisations can foster cross-functional collaborations and prevent analyst burnout, as well as how they should ensure teams are educated, aligned, and flexible. 

Keeping a balance between enabling digital transformation and maintaining cyber resilience has become one of the most important challenges of the modern security mandate. If organisations succeed in this endeavour, resilience must be built into their cybersecurity strategy from the beginning, not just as an afterthought. 

Threat actors have evolved from ideology-driven disruptions to monetisation-focused attacks in the age of cybercrime, which has grown into a multi-trillion-dollar industry. They have moved from spam and botnets to crypto mining and now ransomware-as-a-service. In light of the rapid increase in threat sophistication, organisations are being forced to rethink traditional cybersecurity paradigms in an attempt to stay competitive. 

A Chief Information Security Officer (CISO), an IT security leader, or a Managed Service Provider (MSP) who is starting a new role needs to be clear about the objective within the first 100 days of taking on a new role. In order to prevent as many attacks as possible, create friction for cybercriminals, and maintain internal alignment without disrupting IT operations, the most effective method has been to start with prevention. 

One of the most significant characteristics of modern attacks is that up to 90% of them take advantage of macros in Office to deliver remote access tools or malicious payloads. Disabling these macros, often with minimal disruption to business, can reduce exposure to these threats immediately. It is also becoming more common for organisations to adopt applications allowlisting to only allow explicitly approved applications, to block not only malware but also abused legitimate tools, such as TeamViewer and GoToAssist, automatically. 

A behavioural-level control like RingfencingTM also adds a layer of protection to this, preventing allowed applications from executing unauthorised actions and mitigating exploit-based threats such as Follina through the use of behaviour-level controls such as RingfencingTM. Collectively, these proactive controls reflect an important shift towards threat prevention and operational resilience as well. 

In the face of the emergence of generative AI that is deeply embedded within enterprise workflows, a new frontier of cybersecurity has emerged — one that extends well beyond conventional systems into the interaction between employees and artificial intelligence models. What was once considered speculative risks is now becoming a matter of urgency for organisations. 

In recent years, organisations have begun to recognise how important it is to secure how employees interact with artificial intelligence services from both external and internal sources, and have implemented a growing number of solutions designed to monitor and prompt activity, assess data sensitivity, and enforce usage policies. 

In order to maintain regulatory compliance in increasingly AI-aided environments, these controls are crucial for protecting proprietary information as well as for maintaining regulatory compliance. It is also crucial to secure the AI systems that organisations build, including the training datasets they use, the model outputs they use, and the decision logic they use, as well as the systems that they build. Emerging threats, such as prompt injection attacks and model manipulation, emphasise the need for visibility and control tailored specifically to artificial intelligence. 

Due to the impact of AI applications on security, a new class of AI application security tools has been developed, which leads to the establishment of AI system protection as a core discipline of cybersecurity, and raises it to the same level as the security of traditional infrastructures. Increasingly, organisations are adapting to an increasingly perimeterless digital environment, making the need to strengthen basic security controls non-negotiable. 

The multi-factor authentication (MFA) approach is at the forefront of remote access defence as it offers the ability to secure accounts spanning Microsoft 365, Google Workspace, domain registrars, and remote administration tools. MFA offers these accounts a crucial level of security. The use of multi-factor authentication reduces the likelihood that unauthorised access could occur even if credentials have been compromised. It is also vital that least-privilege principles be enforced. 

Despite the fact that attackers can easily install ransomware without administrative privileges, stripping local admin privileges prevents them from disabling security controls and escalating privileges. It has been recommended that users should be given elevated access to specific applications through dedicated tools rather than being given it to an entire group of users. 

In regard to data security, the use of full-disk encryption, such as BitLocker, is essential for preventing unauthorised access to virtual hard disks and tampering. As well as reducing exposure further, the use of granular permissions to access data is also crucial, ensuring that only information pertinent to their function is accessed by users and applications. 

As an example, it is important to limit tools like SSH clients to log files and restrict sensitive financial data to financial roles that do not have access to it. In addition, USB devices should be blocked by default, with a narrowly defined exception for encrypted, sanctioned drives, since they are a common vector for malware and data theft. 

The ability to monitor file activity in real time across endpoints, cloud platforms like OneDrive, and removable media has become increasingly important to the success of any comprehensive security program. This visibility can assist in proactive monitoring and enhance incident response by providing a detailed understanding of data interactions, as well as improving incident response. 

There is a strong possibility that the cybersecurity landscape will become even more complex in the future as digital ecosystems expand, adversaries refine their tactics, and companies pursue accelerated innovation, thereby increasing the complexity of the landscape. As a response, security leaders need to go beyond conventional defensive approaches and create a culture of vigilance, accountability, and adaptability that extends across entire organisations. 

Organisations will need to invest in specialised talent, cross-functional collaboration, and continuous security validation in order to deal with the convergence of IT, AI, cloud, and operational technologies. As well, with a growing number of regulatory scrutiny and stakeholder expectations, cybersecurity is now measured not just by its capability to block threats, but also by how it enables secure growth, safeguards the reputation of its users, and ensures that digital trust is maintained.

A cybersecurity strategy that integrates security seamlessly into business objectives rather than as a barrier will provide organizations with the best chance of navigating the next wave of risk and resilience in an increasingly volatile threat environment by integrating it seamlessly into business objectives. In 2025 and beyond, cybersecurity leadership will be defined by staying proactive, intelligent, and resilient as market forces continue to change the landscape of risk.

Fake Dating Apps Target Users in a New Appstore Phishing Campaign

Fake Dating Apps Target Users in a New Appstore Phishing Campaign

Malicious dating apps are stealing user information

When we download any app on our smartphones, we often don't realize that what appears harmless on the surface can be a malicious app designed to attack our device with malware. What makes this campaign different is that it poses as a utility app and uses malicious dating apps, file-sharing apps, and car service platforms. 

When a victim installs these apps on their device, the apps deploy an info-stealing malware that steals personal data. Threat actors behind the campaign go a step further by exposing victims’ information if their demands are not met.

iOS and Android users are at risk

As anyone might have shared a link to any malicious domains that host these fake apps, Android and iOS users worldwide can be impacted. Experts advise users to exercise caution when installing apps through app stores and to delete those that seem suspicious or are not used frequently. 

Zimperium’s security researchers have dubbed the new campaign “SarangTrap,” which lures potential targets into opening phishing sites. These sites are made to mimic famous brands and app stores, which makes the campaign look real and tricks users into downloading these malicious apps. 

How does the campaign work?

After installation, the apps prompt users to give permissions for proper work. In dating apps, users are asked to give a valid invitation code. When a user enters the code, it is sent to a hacker-controlled server for verification, and later requests are made to get sensitive information, which is then used to deploy malware on a device. This helps to hide the malware from antivirus software and other security checks. The apps then show their true nature; they may look real in the beginning, but they don’t contain any dating features at all.

How to stay safe from fake apps

Avoid installing and sideloading apps from unknown websites and sources. If you are redirected to a website to install an app instead of the official app store, you should immediately avoid the app.

When installing new apps on your device, pay attention to the permissions they request when you open them. While it is normal for a text messaging app to request access to your texts, it is unusual for a dating app to do the same. If you find any permission requests odd, it is a major sign that the app may be malicious.

Experts also advise users to limit the number of apps they install on their phones because even authentic apps can be infected with malicious code when there are too many apps installed on your device.

Hackers Exploit End-of-Life SonicWall Devices Using Overstep Malware and Possible Zero-Day

 

Cybersecurity experts from Google’s Threat Intelligence Group (GTIG) have uncovered a series of attacks targeting outdated SonicWall Secure Mobile Access (SMA) devices, which are widely used to manage secure remote access in enterprise environments. 

These appliances, although no longer supported with updates, remain in operation at many organizations, making them attractive to cybercriminals. The hacking group behind these intrusions has been named UNC6148 by Google. Despite being end-of-life, the devices still sit on the edge of sensitive networks, and their continued use has led to increased risk exposure. 

GTIG is urging all organizations that rely on these SMA appliances to examine them for signs of compromise. They recommend that firms collect complete disk images for forensic analysis, as the attackers are believed to be using rootkit-level tools to hide their tracks, potentially tampering with system logs. Assistance from SonicWall may be necessary for acquiring these disk images from physical devices. There is currently limited clarity around the technical specifics of these breaches. 

The attackers are leveraging leaked administrator credentials to gain access, though it remains unknown how those credentials were originally obtained. It’s also unclear what software vulnerabilities are being exploited to establish deeper control. One major obstacle to understanding the attacks is a custom backdoor malware called Overstep, which is capable of selectively deleting system logs to obscure its presence and activity. 

Security researchers believe the attackers might be using a zero-day vulnerability, or possibly exploiting known flaws like CVE-2021-20038 (a memory corruption bug enabling remote code execution), CVE-2024-38475 (a path traversal issue in Apache that exposes sensitive database files), or CVE-2021-20035 and CVE-2021-20039 (authenticated RCE vulnerabilities previously seen in the wild). There’s also mention of CVE-2025-32819, which could allow credential reset attacks through file deletion. 

GTIG, along with Mandiant and SonicWall’s internal response team, has not confirmed exactly how the attackers managed to deploy a reverse shell—something that should not be technically possible under normal device configurations. This shell provides a web-based interface that facilitates the installation of Overstep and potentially gives attackers full control over the compromised appliance. 

The motivations behind these breaches are still unclear. Since Overstep deletes key logs, detecting an infection is particularly difficult. However, Google has shared indicators of compromise to help organizations determine if they have been affected. Security teams are strongly advised to investigate the presence of these indicators and consider retiring unsupported hardware from critical infrastructure as part of a proactive defense strategy.

Google Gemini Exploit Enables Covert Delivery of Phishing Content

 


An AI-powered automation system in professional environments, such as Google Gemini for Workspace, is vulnerable to a new security flaw. Using Google’s advanced large language model (LLM) integration within its ecosystem, Gemini enables the use of artificial intelligence (AI) directly with a wide range of user tools, including Gmail, to simplify workplace tasks. 

A key feature of the app is the ability to request concise summaries of emails, which are intended to save users time and prevent them from becoming fatigued in their inboxes by reducing the amount of time they spend in it. Security researchers have, however identified a significant flaw in this feature which appears to be so helpful. 

As Mozilla bug bounty experts pointed out, malicious actors can take advantage of the trust users place in Gemini's automated responses by manipulating email content so that the AI is misled into creating misleading summaries by manipulating the content. As a result of the fact that Gemini operates within Google's trusted environment, users are likely to accept its interpretations without question, giving hackers a prime opportunity. This finding highlights what is becoming increasingly apparent in the cybersecurity landscape: when powerful artificial intelligence tools are embedded within widely used platforms, even minor vulnerabilities can be exploited by sophisticated social engineers. 

It is the vulnerability at the root of this problem that Gemini can generate e-mail summaries that seem legitimate but can be manipulated so as to include deceptive or malicious content without having to rely on conventional red flags, such as suspicious links or file attachments, to detect it. 

An attack can be embedded within an email body as an indirect prompt injection by attackers, according to cybersecurity researchers. When Gemini's language model interprets these hidden instructions during thesummarisationn process, it causes the AI to unintentionally include misleading messages in the summary that it delivers to the user, unknowingly. 

As an example, a summary can falsely inform the recipient that there has been a problem with their account, advising them to act right away, and subtly direct them to a phishing site that appears to be reliable and trustworthy. 

While prompt injection attacks on LLMs have been documented since the year 2024, and despite the implementation of numerous safeguards by developers to prevent these manipulations from occurring, this method continues to be effective even today. This tactic is persisting because of the growing sophistication of threat actors as well as the challenge of fully securing generative artificial intelligence systems that are embedded in critical communication platforms. 

There is also a need to be more vigilant when developing artificial intelligence and making sure users are aware of it, as traditional cybersecurity cues may no longer apply to these AI-driven environments. In order to find these vulnerabilities, a cybersecurity researcher, Marco Figueroa, identified them and responsibly disclosed them through Mozilla's 0Din bug bounty program, which specialises in finding vulnerabilities in generative artificial intelligence. 

There is a clever but deeply concerning method of exploitation demonstrated in Figueroa's proof-of-concept. The attack begins with a seemingly harmless e-mail sent to the intended victim that appears harmless at first glance. A phishing prompt disguised in white font on a white background is hidden in a secondary, malicious component of the message, which conceals benign information so as to avoid suspicion of the message.

When viewed in a standard email client, it is completely invisible to the human eye and is hidden behind benign content. The malicious message is strategically embedded within custom tags, which are not standard HTML elements, but which appear to be interpreted in a privileged manner by Gemini's summarization function, as they are not standard HTML elements. 

By activating the "Summarise this email" feature in Google Gemini, a machine learning algorithm takes into account both visible and hidden text within the email. Due to the way Gemini handles input wrapped in tags, it prioritises and reproduces the hidden message verbatim within the summary, placing it at the end of the response, as it should. 

In consequence, what appears to be a trustworthy, AI-generated summary now contains manipulative instructions which can be used to entice people to click on phishing websites, effectively bypassing traditional security measures. A demonstration of the ease with which generative AI tools can be exploited when trust in the system is assumed is demonstrated in this attack method, and it further demonstrates the importance of robust sanitisation protocols as well as input validation protocols for prompt sanitisation. 

It is alarming how effectively the exploitation technique is despite its technical simplicity. An invisible formatting technique enables the embedding of hidden prompts into an email, leveraging Google Gemini's interpretation of raw content to capitalise on its ability to comprehend the content. In the documented attack, a malicious actor inserts a command inside a span element with font-size: 0 and colour: white, effectively rendering the content invisible to the recipient who is viewing the message in a standard email client. 

Unlike a browser, which renders only what can be seen by the user, Gemini process the entire raw HTML document, including all hidden elements. As a consequence, Gemini's summary feature, which is available to the user when they invoke it, interprets and includes the hidden instruction as though it were part of the legitimate message in the generated summary.

It is important to note that this flaw has significant implications for services that operate at scale, as well as for those who use them regularly. A summary tool that is capable of analysing HTML inline styles, such as font-size:0, colour: white, and opacity:0, should be instructed to ignore or neutralise these styles, which render text visually hidden. 

The development team can also integrate guard prompts into LLM behaviour, instructing models not to ignore invisible content, for example. In terms of user education, he recommends that organisations make sure their employees are aware that AI-generated summaries, including those generated by Gemini, serve only as informational aids and should not be treated as authoritative sources when it comes to urgent or security-related instructions. 

A vulnerability of this magnitude has been discovered at a crucial time, as more and more tech companies are increasingly integrating LLMs into their platforms to automate productivity. In contrast to previous models, where users would manually trigger AI tools, the new paradigm is a shift to automated AI tools that will run in the background instead.

It is for this reason that Google introduced the Gemini side panel last year in Gmail, Docs, Sheets, and other Workspace apps to help users summarise and create content within their workflow seamlessly. A noteworthy change in Gmail's functionality is that on May 29, Google enabled automatic email summarisation for users whose organisations have enabled smart features across Gmail, Chat, Meet, and other Workspace tools by activating a default personalisation setting. 

As generative artificial intelligence becomes increasingly integrated into everyday communication systems, robust security protocols will become increasingly important as this move enhances convenience. This vulnerability exposes an issue of fundamental inadequacy in the current guardrails used for LLM, primarily focusing on filtering or flagging content that is visible to the user. 

A significant number of AI models, including the Google Gemini AI model, continue to use raw HTML markup, making them susceptible to obfuscation techniques such as zero-font text and white-on-white formatting. Despite being invisible to users, these techniques are still considered valid input to the model by the model-thereby creating a blind spot for attackers that can easily be exploited by attackers. 

Mozilla's 0Din program classified the issue as a moderately serious vulnerability by Mozilla, and said that the flaw could be exploited by hackers to harvest credential information, use vishing (voice-phishing), and perform other social engineering attacks by exploiting trust in artificial intelligence-generated content in order to gain access to information. 

In addition to the output filter, a post-processing filter can also function as an additional safeguard by inspecting artificial intelligence-generated summaries for signs of manipulation, such as embedded URLs, telephone numbers, or language that implies urgency, flagging these suspicious summaries for human review. This layered defence strategy is especially vital in environments where AI operates at scale. 

As well as protecting against individual attacks, there is also a broader supply chain risk to consider. It is clear that mass communication systems, such as CRM platforms, newsletters, and automated support ticketing services, are potential vectors for injection, according to researcher Marco Figueroa. There is a possibility that a single compromised account on any of these SaaS systems can be used to spread hidden prompt injections across thousands of recipients, turning otherwise legitimate SaaS services into large-scale phishing attacks. 

There is an apt term to describe "prompt injections", which have become the new email macros according to the research. The exploit exhibited by Phishing for Gemini significantly underscores a fundamental truth: even apparently minor, invisible code can be weaponised and used for malicious purposes. 

As long as language models don't contain robust context isolation that ensures third-party content is sandboxed or subjected to appropriate scrutiny, each piece of input should be viewed as potentially executable code, regardless of whether it is encoded correctly or not. In light of this, security teams should start to understand that AI systems are no longer just productivity tools, but rather components of a threat surface that need to be actively monitored, measured, and contained. 

The risk landscape of today does not allow organisations to blindly trust AI output. Because generative artificial intelligence is being integrated into enterprise ecosystems in ever greater numbers, organisations must reevaluate their security frameworks in order to address the emerging risks that arise from machine learning systems in the future. 

Considering the findings regarding Google Gemini, it is urgent to consider AI-generated outputs as potential threat vectors, as they are capable of being manipulated in subtle but impactful ways. A security protocol based on AI needs to be implemented by enterprises to prevent such exploitations from occurring, robust validation mechanisms for automated content need to be established, and a collaborative oversight system between development, IT, and security teams must be established to ensure this doesn't happen again. 

Moreover, it is imperative that AI-driven tools, especially those embedded within communication workflows, be made accessible to end users so that they can understand their capabilities and limitations. In light of the increasing ease and pervasiveness of automation in digital operations, it will become increasingly essential to maintain a culture of informed vigilance across all layers of the organisation to maintain trust and integrity.