Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Google+. Show all posts

How ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attacks leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker’s best friend in data thefts. Known as AgentFlayer, the exploit uses a single document to hide “secret” prompt instructions that target OpenAI’s chatbot. An attacker can share what appears to be a harmless document with victims through Google Drive, without any clicks.

Zero-click threat: AgentFlayer

AgentFlayer is a “zero-click” threat as it abuses a vulnerability in Connectors, for instance, a ChatGPT feature that connects the assistant to other applications, websites, and services. OpenAI suggests that Connectors supports a few of the world’s most widely used platforms. This includes cloud storage platforms such as Microsoft OneDrive and Google Drive.

Experts used Google Drive to expose the threats possible from chatbots and hidden prompts. 

GoogleDoc used for injecting prompt

The malicious document has a 300-word hidden malicious prompt. The text is size one, formatted in white to hide it from human readers but visible to the chatbot.

The prompt used to showcase AgentFlayer’s attacks prompts ChatGPT to find the victim’s Google Drive for API keys, link them to a tailored URL, and an external server. When the malicious document is shared, the attack is launched. The threat actor gets the hidden API keys when the target uses ChatGPT (the Connectors feature has to be enabled).

Othe cloud platforms at risk too

AgentFlayer is not a bug that only affects the Google Cloud. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said expert Tamir Ishay Sharbat from Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenith Labs said in the report.

Tech Giant Google Introduces an Open-Source AI Agent to Automate Coding Activities

 

Google has launched Gemini CLI GitHub Actions, an open-source AI agent that automates routine coding tasks directly within GitHub repositories. This tool, now in beta and available globally, acts as an AI coding teammate that works both autonomously and on-demand to handle repetitive development workflows.

Key features

The Gemini CLI GitHub Actions is triggered by repository events such as new issues or pull requests, working asynchronously to triage problems, review code, and assist developers. Developers can directly interact with the agent by tagging @gemini-cli in issues or pull requests and assigning specific tasks like writing tests, implementing changes, or fixing bugs. The tool ships with three default intelligent workflows:

  • Issue triage and auto-labeling 
  • Accelerated pull request reviews
  • On-demand collaboration for targeted task delegation 

Built on Google's earlier Gemini CLI tool, the GitHub Actions version extends AI assistance from individual terminals to collaborative team environments. The agent provides powerful AI capabilities including code understanding, file manipulation, command execution, and dynamic troubleshooting. 

For individual developers, Google offers generous free usage limits of 60 model requests per minute and 1,000 requests per day at no charge when using a personal Google account. The tool integrates with Google's Gemini Code Assist, giving developers access to Gemini 2.5 Pro and its massive 1 million token context window.

The platform prioritizes security with credential-less authentication through Google Cloud's Workload Identity Federation, eliminating the need for long-lived API keys. Additional security measures include granular control with command allowlisting and complete transparency through OpenTelemetry integration for real-time monitoring.

Market positioning 

This launch represents Google's broader push into open-source AI development tools, positioning the agent as a direct competitor to GitHub Copilot and other AI coding assistants. Unlike traditional coding assistants that primarily suggest code, Gemini CLI GitHub Actions actively automates core developer workflows and can push commits autonomously.

The tool is part of Google's wider ecosystem of AI agents, following the company's release of other tools like the Agent2Agent Protocol for inter-agent communication and the "Big Sleep" security vulnerability detection system. 

The developer community has also responded enthusiastically to these releases, with thousands of developers already utilizing the tools during beta phases. During Jules' testing period alone, users completed tens of thousands of tasks and contributed over 140,000 public code improvements, demonstrating the practical value and adoption potential of these AI-powered development tools.

Google Confirms Data Breach in Salesforce System Linked to Known Hacking Group

 



Google has admitted that some of its customer data was stolen after hackers managed to break into one of its Salesforce databases.

The company revealed the incident in a blog post on Tuesday, explaining that the affected database stored contact details and notes about small and medium-sized business clients. The hackers, a group known online as ShinyHunters and officially tracked as UNC6040, were able to access the system briefly before Google’s security team shut them out.

Google stressed that the stolen information was limited to “basic and mostly public” details, such as business names, phone numbers, and email addresses. It did not share how many customers were affected, and a company spokesperson declined to answer further questions, including whether any ransom demand had been made.

ShinyHunters is notorious for breaking into large organizations’ cloud systems. In this case, Google says the group used voice phishing, calling employees and tricking them into granting system access — to target its Salesforce environment. Similar breaches have recently hit other companies using Salesforce, including Cisco, Qantas, and Pandora.

While Google believes the breach’s immediate impact will be minimal, cybersecurity experts warn there may be longer-term risks. Ben McCarthy, a lead security engineer at Immersive, pointed out that even simple personal details, once in criminal hands, can be exploited for scams and phishing attacks. Unlike passwords, names, dates of birth, and email addresses cannot be changed.

Google says it detected and stopped the intrusion before all data could be removed. In fact, the hackers only managed to take a small portion of the targeted database. Earlier this year, without naming itself as the victim, Google had warned of a similar case where a threat actor retrieved only about 10% of data before being cut off.

Reports suggest the attackers may now be preparing to publish the stolen information on a data leak site, a tactic often used to pressure companies into paying ransoms. ShinyHunters has been linked to other criminal networks, including The Com, a group known for hacking, extortion, and sometimes even violent threats.

Adding to the uncertainty, the hackers themselves have hinted they might leak the data outright instead of trying to negotiate with Google. If that happens, affected business contacts could face targeted phishing campaigns or other cyber threats.

For now, Google maintains that its investigation is ongoing and says it is working to ensure no further data is at risk. Customers are advised to stay alert for suspicious calls, emails, or messages claiming to be from Google or related business partners.

Market Trends Reveal Urgent Emerging Cybersecurity Requirements

 


During an era of unprecedented digital acceleration and hyperconnectivity, cybersecurity is no longer the sole responsibility of IT departments — it has now become a crucial strategic pillar for businesses of all sizes in an age of hyperconnectivity. 

Recent market trends are signalling an urgent need for a recalibration of cybersecurity priorities, as sophisticated cyber threats are on the rise, regulations are being tightened, and cloud-native technologies are on the rise. Increasingly, businesses and governments are realising that security is no longer merely a technical protection, but rather a foundational component of trust, resilience, and long-term growth. 

There is a growing need in the cybersecurity market for proactive, adaptable, and intelligence-driven defences as a result of the evolving threat landscape and expanding attack surfaces. There is no doubt that the market is speaking louder through investment shifts, vendor realignment, and customer demand, which is why modern cybersecurity must move in lockstep with innovation — otherwise it may turn out to be a costly vulnerability. 

Increasingly, organisations are finding that they are having trouble coping with the speed at which technological innovation and business transformation are taking place. Throughout the year 2025, chief information security officers (CISOs) will be faced with a critical situation where they must defend their organisations not only from evolving threats but also demonstrate that their security programs can be of tangible business value. 

Based on emerging insights, cybersecurity leaders are increasingly focused on ensuring that resilience is embedded at all levels — organisational, team-based, and individual — as a means of maintaining performance and operational continuity when adversity occurs. Based on recent industry trends, nine core capabilities seem to be the most important ones to address this mandate, ranging from how organisations can foster cross-functional collaborations and prevent analyst burnout, as well as how they should ensure teams are educated, aligned, and flexible. 

Keeping a balance between enabling digital transformation and maintaining cyber resilience has become one of the most important challenges of the modern security mandate. If organisations succeed in this endeavour, resilience must be built into their cybersecurity strategy from the beginning, not just as an afterthought. 

Threat actors have evolved from ideology-driven disruptions to monetisation-focused attacks in the age of cybercrime, which has grown into a multi-trillion-dollar industry. They have moved from spam and botnets to crypto mining and now ransomware-as-a-service. In light of the rapid increase in threat sophistication, organisations are being forced to rethink traditional cybersecurity paradigms in an attempt to stay competitive. 

A Chief Information Security Officer (CISO), an IT security leader, or a Managed Service Provider (MSP) who is starting a new role needs to be clear about the objective within the first 100 days of taking on a new role. In order to prevent as many attacks as possible, create friction for cybercriminals, and maintain internal alignment without disrupting IT operations, the most effective method has been to start with prevention. 

One of the most significant characteristics of modern attacks is that up to 90% of them take advantage of macros in Office to deliver remote access tools or malicious payloads. Disabling these macros, often with minimal disruption to business, can reduce exposure to these threats immediately. It is also becoming more common for organisations to adopt applications allowlisting to only allow explicitly approved applications, to block not only malware but also abused legitimate tools, such as TeamViewer and GoToAssist, automatically. 

A behavioural-level control like RingfencingTM also adds a layer of protection to this, preventing allowed applications from executing unauthorised actions and mitigating exploit-based threats such as Follina through the use of behaviour-level controls such as RingfencingTM. Collectively, these proactive controls reflect an important shift towards threat prevention and operational resilience as well. 

In the face of the emergence of generative AI that is deeply embedded within enterprise workflows, a new frontier of cybersecurity has emerged — one that extends well beyond conventional systems into the interaction between employees and artificial intelligence models. What was once considered speculative risks is now becoming a matter of urgency for organisations. 

In recent years, organisations have begun to recognise how important it is to secure how employees interact with artificial intelligence services from both external and internal sources, and have implemented a growing number of solutions designed to monitor and prompt activity, assess data sensitivity, and enforce usage policies. 

In order to maintain regulatory compliance in increasingly AI-aided environments, these controls are crucial for protecting proprietary information as well as for maintaining regulatory compliance. It is also crucial to secure the AI systems that organisations build, including the training datasets they use, the model outputs they use, and the decision logic they use, as well as the systems that they build. Emerging threats, such as prompt injection attacks and model manipulation, emphasise the need for visibility and control tailored specifically to artificial intelligence. 

Due to the impact of AI applications on security, a new class of AI application security tools has been developed, which leads to the establishment of AI system protection as a core discipline of cybersecurity, and raises it to the same level as the security of traditional infrastructures. Increasingly, organisations are adapting to an increasingly perimeterless digital environment, making the need to strengthen basic security controls non-negotiable. 

The multi-factor authentication (MFA) approach is at the forefront of remote access defence as it offers the ability to secure accounts spanning Microsoft 365, Google Workspace, domain registrars, and remote administration tools. MFA offers these accounts a crucial level of security. The use of multi-factor authentication reduces the likelihood that unauthorised access could occur even if credentials have been compromised. It is also vital that least-privilege principles be enforced. 

Despite the fact that attackers can easily install ransomware without administrative privileges, stripping local admin privileges prevents them from disabling security controls and escalating privileges. It has been recommended that users should be given elevated access to specific applications through dedicated tools rather than being given it to an entire group of users. 

In regard to data security, the use of full-disk encryption, such as BitLocker, is essential for preventing unauthorised access to virtual hard disks and tampering. As well as reducing exposure further, the use of granular permissions to access data is also crucial, ensuring that only information pertinent to their function is accessed by users and applications. 

As an example, it is important to limit tools like SSH clients to log files and restrict sensitive financial data to financial roles that do not have access to it. In addition, USB devices should be blocked by default, with a narrowly defined exception for encrypted, sanctioned drives, since they are a common vector for malware and data theft. 

The ability to monitor file activity in real time across endpoints, cloud platforms like OneDrive, and removable media has become increasingly important to the success of any comprehensive security program. This visibility can assist in proactive monitoring and enhance incident response by providing a detailed understanding of data interactions, as well as improving incident response. 

There is a strong possibility that the cybersecurity landscape will become even more complex in the future as digital ecosystems expand, adversaries refine their tactics, and companies pursue accelerated innovation, thereby increasing the complexity of the landscape. As a response, security leaders need to go beyond conventional defensive approaches and create a culture of vigilance, accountability, and adaptability that extends across entire organisations. 

Organisations will need to invest in specialised talent, cross-functional collaboration, and continuous security validation in order to deal with the convergence of IT, AI, cloud, and operational technologies. As well, with a growing number of regulatory scrutiny and stakeholder expectations, cybersecurity is now measured not just by its capability to block threats, but also by how it enables secure growth, safeguards the reputation of its users, and ensures that digital trust is maintained.

A cybersecurity strategy that integrates security seamlessly into business objectives rather than as a barrier will provide organizations with the best chance of navigating the next wave of risk and resilience in an increasingly volatile threat environment by integrating it seamlessly into business objectives. In 2025 and beyond, cybersecurity leadership will be defined by staying proactive, intelligent, and resilient as market forces continue to change the landscape of risk.

Fake Dating Apps Target Users in a New Appstore Phishing Campaign

Fake Dating Apps Target Users in a New Appstore Phishing Campaign

Malicious dating apps are stealing user information

When we download any app on our smartphones, we often don't realize that what appears harmless on the surface can be a malicious app designed to attack our device with malware. What makes this campaign different is that it poses as a utility app and uses malicious dating apps, file-sharing apps, and car service platforms. 

When a victim installs these apps on their device, the apps deploy an info-stealing malware that steals personal data. Threat actors behind the campaign go a step further by exposing victims’ information if their demands are not met.

iOS and Android users are at risk

As anyone might have shared a link to any malicious domains that host these fake apps, Android and iOS users worldwide can be impacted. Experts advise users to exercise caution when installing apps through app stores and to delete those that seem suspicious or are not used frequently. 

Zimperium’s security researchers have dubbed the new campaign “SarangTrap,” which lures potential targets into opening phishing sites. These sites are made to mimic famous brands and app stores, which makes the campaign look real and tricks users into downloading these malicious apps. 

How does the campaign work?

After installation, the apps prompt users to give permissions for proper work. In dating apps, users are asked to give a valid invitation code. When a user enters the code, it is sent to a hacker-controlled server for verification, and later requests are made to get sensitive information, which is then used to deploy malware on a device. This helps to hide the malware from antivirus software and other security checks. The apps then show their true nature; they may look real in the beginning, but they don’t contain any dating features at all.

How to stay safe from fake apps

Avoid installing and sideloading apps from unknown websites and sources. If you are redirected to a website to install an app instead of the official app store, you should immediately avoid the app.

When installing new apps on your device, pay attention to the permissions they request when you open them. While it is normal for a text messaging app to request access to your texts, it is unusual for a dating app to do the same. If you find any permission requests odd, it is a major sign that the app may be malicious.

Experts also advise users to limit the number of apps they install on their phones because even authentic apps can be infected with malicious code when there are too many apps installed on your device.

Hackers Exploit End-of-Life SonicWall Devices Using Overstep Malware and Possible Zero-Day

 

Cybersecurity experts from Google’s Threat Intelligence Group (GTIG) have uncovered a series of attacks targeting outdated SonicWall Secure Mobile Access (SMA) devices, which are widely used to manage secure remote access in enterprise environments. 

These appliances, although no longer supported with updates, remain in operation at many organizations, making them attractive to cybercriminals. The hacking group behind these intrusions has been named UNC6148 by Google. Despite being end-of-life, the devices still sit on the edge of sensitive networks, and their continued use has led to increased risk exposure. 

GTIG is urging all organizations that rely on these SMA appliances to examine them for signs of compromise. They recommend that firms collect complete disk images for forensic analysis, as the attackers are believed to be using rootkit-level tools to hide their tracks, potentially tampering with system logs. Assistance from SonicWall may be necessary for acquiring these disk images from physical devices. There is currently limited clarity around the technical specifics of these breaches. 

The attackers are leveraging leaked administrator credentials to gain access, though it remains unknown how those credentials were originally obtained. It’s also unclear what software vulnerabilities are being exploited to establish deeper control. One major obstacle to understanding the attacks is a custom backdoor malware called Overstep, which is capable of selectively deleting system logs to obscure its presence and activity. 

Security researchers believe the attackers might be using a zero-day vulnerability, or possibly exploiting known flaws like CVE-2021-20038 (a memory corruption bug enabling remote code execution), CVE-2024-38475 (a path traversal issue in Apache that exposes sensitive database files), or CVE-2021-20035 and CVE-2021-20039 (authenticated RCE vulnerabilities previously seen in the wild). There’s also mention of CVE-2025-32819, which could allow credential reset attacks through file deletion. 

GTIG, along with Mandiant and SonicWall’s internal response team, has not confirmed exactly how the attackers managed to deploy a reverse shell—something that should not be technically possible under normal device configurations. This shell provides a web-based interface that facilitates the installation of Overstep and potentially gives attackers full control over the compromised appliance. 

The motivations behind these breaches are still unclear. Since Overstep deletes key logs, detecting an infection is particularly difficult. However, Google has shared indicators of compromise to help organizations determine if they have been affected. Security teams are strongly advised to investigate the presence of these indicators and consider retiring unsupported hardware from critical infrastructure as part of a proactive defense strategy.

Google Gemini Exploit Enables Covert Delivery of Phishing Content

 


An AI-powered automation system in professional environments, such as Google Gemini for Workspace, is vulnerable to a new security flaw. Using Google’s advanced large language model (LLM) integration within its ecosystem, Gemini enables the use of artificial intelligence (AI) directly with a wide range of user tools, including Gmail, to simplify workplace tasks. 

A key feature of the app is the ability to request concise summaries of emails, which are intended to save users time and prevent them from becoming fatigued in their inboxes by reducing the amount of time they spend in it. Security researchers have, however identified a significant flaw in this feature which appears to be so helpful. 

As Mozilla bug bounty experts pointed out, malicious actors can take advantage of the trust users place in Gemini's automated responses by manipulating email content so that the AI is misled into creating misleading summaries by manipulating the content. As a result of the fact that Gemini operates within Google's trusted environment, users are likely to accept its interpretations without question, giving hackers a prime opportunity. This finding highlights what is becoming increasingly apparent in the cybersecurity landscape: when powerful artificial intelligence tools are embedded within widely used platforms, even minor vulnerabilities can be exploited by sophisticated social engineers. 

It is the vulnerability at the root of this problem that Gemini can generate e-mail summaries that seem legitimate but can be manipulated so as to include deceptive or malicious content without having to rely on conventional red flags, such as suspicious links or file attachments, to detect it. 

An attack can be embedded within an email body as an indirect prompt injection by attackers, according to cybersecurity researchers. When Gemini's language model interprets these hidden instructions during thesummarisationn process, it causes the AI to unintentionally include misleading messages in the summary that it delivers to the user, unknowingly. 

As an example, a summary can falsely inform the recipient that there has been a problem with their account, advising them to act right away, and subtly direct them to a phishing site that appears to be reliable and trustworthy. 

While prompt injection attacks on LLMs have been documented since the year 2024, and despite the implementation of numerous safeguards by developers to prevent these manipulations from occurring, this method continues to be effective even today. This tactic is persisting because of the growing sophistication of threat actors as well as the challenge of fully securing generative artificial intelligence systems that are embedded in critical communication platforms. 

There is also a need to be more vigilant when developing artificial intelligence and making sure users are aware of it, as traditional cybersecurity cues may no longer apply to these AI-driven environments. In order to find these vulnerabilities, a cybersecurity researcher, Marco Figueroa, identified them and responsibly disclosed them through Mozilla's 0Din bug bounty program, which specialises in finding vulnerabilities in generative artificial intelligence. 

There is a clever but deeply concerning method of exploitation demonstrated in Figueroa's proof-of-concept. The attack begins with a seemingly harmless e-mail sent to the intended victim that appears harmless at first glance. A phishing prompt disguised in white font on a white background is hidden in a secondary, malicious component of the message, which conceals benign information so as to avoid suspicion of the message.

When viewed in a standard email client, it is completely invisible to the human eye and is hidden behind benign content. The malicious message is strategically embedded within custom tags, which are not standard HTML elements, but which appear to be interpreted in a privileged manner by Gemini's summarization function, as they are not standard HTML elements. 

By activating the "Summarise this email" feature in Google Gemini, a machine learning algorithm takes into account both visible and hidden text within the email. Due to the way Gemini handles input wrapped in tags, it prioritises and reproduces the hidden message verbatim within the summary, placing it at the end of the response, as it should. 

In consequence, what appears to be a trustworthy, AI-generated summary now contains manipulative instructions which can be used to entice people to click on phishing websites, effectively bypassing traditional security measures. A demonstration of the ease with which generative AI tools can be exploited when trust in the system is assumed is demonstrated in this attack method, and it further demonstrates the importance of robust sanitisation protocols as well as input validation protocols for prompt sanitisation. 

It is alarming how effectively the exploitation technique is despite its technical simplicity. An invisible formatting technique enables the embedding of hidden prompts into an email, leveraging Google Gemini's interpretation of raw content to capitalise on its ability to comprehend the content. In the documented attack, a malicious actor inserts a command inside a span element with font-size: 0 and colour: white, effectively rendering the content invisible to the recipient who is viewing the message in a standard email client. 

Unlike a browser, which renders only what can be seen by the user, Gemini process the entire raw HTML document, including all hidden elements. As a consequence, Gemini's summary feature, which is available to the user when they invoke it, interprets and includes the hidden instruction as though it were part of the legitimate message in the generated summary.

It is important to note that this flaw has significant implications for services that operate at scale, as well as for those who use them regularly. A summary tool that is capable of analysing HTML inline styles, such as font-size:0, colour: white, and opacity:0, should be instructed to ignore or neutralise these styles, which render text visually hidden. 

The development team can also integrate guard prompts into LLM behaviour, instructing models not to ignore invisible content, for example. In terms of user education, he recommends that organisations make sure their employees are aware that AI-generated summaries, including those generated by Gemini, serve only as informational aids and should not be treated as authoritative sources when it comes to urgent or security-related instructions. 

A vulnerability of this magnitude has been discovered at a crucial time, as more and more tech companies are increasingly integrating LLMs into their platforms to automate productivity. In contrast to previous models, where users would manually trigger AI tools, the new paradigm is a shift to automated AI tools that will run in the background instead.

It is for this reason that Google introduced the Gemini side panel last year in Gmail, Docs, Sheets, and other Workspace apps to help users summarise and create content within their workflow seamlessly. A noteworthy change in Gmail's functionality is that on May 29, Google enabled automatic email summarisation for users whose organisations have enabled smart features across Gmail, Chat, Meet, and other Workspace tools by activating a default personalisation setting. 

As generative artificial intelligence becomes increasingly integrated into everyday communication systems, robust security protocols will become increasingly important as this move enhances convenience. This vulnerability exposes an issue of fundamental inadequacy in the current guardrails used for LLM, primarily focusing on filtering or flagging content that is visible to the user. 

A significant number of AI models, including the Google Gemini AI model, continue to use raw HTML markup, making them susceptible to obfuscation techniques such as zero-font text and white-on-white formatting. Despite being invisible to users, these techniques are still considered valid input to the model by the model-thereby creating a blind spot for attackers that can easily be exploited by attackers. 

Mozilla's 0Din program classified the issue as a moderately serious vulnerability by Mozilla, and said that the flaw could be exploited by hackers to harvest credential information, use vishing (voice-phishing), and perform other social engineering attacks by exploiting trust in artificial intelligence-generated content in order to gain access to information. 

In addition to the output filter, a post-processing filter can also function as an additional safeguard by inspecting artificial intelligence-generated summaries for signs of manipulation, such as embedded URLs, telephone numbers, or language that implies urgency, flagging these suspicious summaries for human review. This layered defence strategy is especially vital in environments where AI operates at scale. 

As well as protecting against individual attacks, there is also a broader supply chain risk to consider. It is clear that mass communication systems, such as CRM platforms, newsletters, and automated support ticketing services, are potential vectors for injection, according to researcher Marco Figueroa. There is a possibility that a single compromised account on any of these SaaS systems can be used to spread hidden prompt injections across thousands of recipients, turning otherwise legitimate SaaS services into large-scale phishing attacks. 

There is an apt term to describe "prompt injections", which have become the new email macros according to the research. The exploit exhibited by Phishing for Gemini significantly underscores a fundamental truth: even apparently minor, invisible code can be weaponised and used for malicious purposes. 

As long as language models don't contain robust context isolation that ensures third-party content is sandboxed or subjected to appropriate scrutiny, each piece of input should be viewed as potentially executable code, regardless of whether it is encoded correctly or not. In light of this, security teams should start to understand that AI systems are no longer just productivity tools, but rather components of a threat surface that need to be actively monitored, measured, and contained. 

The risk landscape of today does not allow organisations to blindly trust AI output. Because generative artificial intelligence is being integrated into enterprise ecosystems in ever greater numbers, organisations must reevaluate their security frameworks in order to address the emerging risks that arise from machine learning systems in the future. 

Considering the findings regarding Google Gemini, it is urgent to consider AI-generated outputs as potential threat vectors, as they are capable of being manipulated in subtle but impactful ways. A security protocol based on AI needs to be implemented by enterprises to prevent such exploitations from occurring, robust validation mechanisms for automated content need to be established, and a collaborative oversight system between development, IT, and security teams must be established to ensure this doesn't happen again. 

Moreover, it is imperative that AI-driven tools, especially those embedded within communication workflows, be made accessible to end users so that they can understand their capabilities and limitations. In light of the increasing ease and pervasiveness of automation in digital operations, it will become increasingly essential to maintain a culture of informed vigilance across all layers of the organisation to maintain trust and integrity.

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.

Google Gemini for Workplace can be compromised to create email summaries that look real but contain harmful instructions or warnings that redirect users to phishing websites without using direct links or attachments. 

Similar attacks were reported in 2024 and afterwards; safeguards were pushed to stop misleading responses. However, the tactic remains a problem for security experts. 

Gemini for attack

A prompt-injection attack on the Gemini model was revealed via cybersecurity researcher Marco Figueoa, at 0din, Mozilla’s bug bounty program for GenAI tools. The tactic creates an email with a hidden directive for Gemini. The threat actor can hide malicious commands in the message body text at the end via CSS and HTML, which changes the font size to zero and color to white. 

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Gmail does not render the malicious instruction as there are no attachments or links present, and the message may reach the victim’s inbox. If the receiver opens the email and asks Gemini to make a summary of the received mail, the AI tool will parse the invisible directive and create the summary. Figueroa provides an example of Gemini following hidden prompts, accompanied by a security warning that the victim’s Gmail password and phone number may be compromised.

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, changing one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactics applies to Gemini in Slides, Drive search, Docs and any workplace where the model is getting third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

A security bug in a stealthy Android spyware operation, “Catwatchful,” has exposed full user databases affecting its 62,000 customers and also its app admin. The vulnerability was found by cybersecurity expert Eric Daigle reported about the spyware app’s full database of email IDs and plaintext passwords used by Catwatchful customers to access stolen data from the devices of their victims. 

Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador. A few records date back to 2018. The leaked database also revealed the identity of the Catwatchful admin called Omar Soca Char.

The Catwatchful database also revealed the identity of the spyware operation’s administrator, Omar Soca Charcov, a developer based in Uruguay.

About Catwatchful

Catwatchful is a spyware that pretends to be a child monitoring app, claiming to be “invisible and can not be detected,” while it uploads the victim’s data to a dashboard accessible to the person who planted the app. The stolen data includes real-time location data, victims’ photos, and messages.  The app can also track live ambient audio from the device’s mic and access the phone camera (both front and rear).

Catwatchful and similar apps are banned on app stores, and depend on being downloaded and deployed by someone having physical access to a victim’s phone. These apps are famous as “stalkerware” or “spouseware” as they are capable of unauthorized and illegal non-consensual surveillance of romantic partners and spouses. 

Rise of spyware apps

The Catwatchful incident is the fifth and latest in this year’s growing list of stalkerware scams that have been breached, hacked, or had their data exposed. 

How was the spyware found?

Daigle has previously discovered stalkerware exploits. Catwatchful uses a custom-made API, which the planted app uses to communicate to send data back to Catwatchful servers. The stalkerware also uses Google Firebase to host and store stolen data. 

According to Techradar, the “data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data. The report also confirms that, although hosting had initially been suspended by HostGator, it had been restored via another temporary domain."

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch passwords, use passkeys

Microsoft and Google users, in particular, have been warned about ditching passwords for passkeys. Passwords are easy to steal and can unlock your digital life. Microsoft has been at the forefront, confirming it will delete passwords for more than a billion users. Google, too, has warned that most of its users will have to add passkeys to their accounts. 

What are passkeys?

Instead of a username and password, passkeys use our device security to log into our account. This means that there is no password to hack and no two-factor authentication codes to bypass, making it phishing-resistant.

At the same time, the Okta team warned that it found threat actors exploiting v0, an advanced GenAI tool made by Vercelopens, to create phishing websites that mimic real sign-in webpages

Okta warns users to not use passwords

A video shows how this works, raising concerns about users still using passwords to sign into their accounts, even when backed by multi-factor authentication, and “especially if that 2FA is nothing better than SMS, which is now little better than nothing at all,” according to Forbes. 

According to Okta, “This signals a new evolution in the weaponization of GenAI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts. The technology is being used to build replicas of the legitimate sign-in pages of multiple brands, including an Okta customer.”

Why are passwords not safe?

It is shocking how easy a login webpage can be mimicked. Users should not be surprised that today’s cyber criminals are exploiting and weaponizing GenAI features to advance and streamline their phishing attacks. AI in the wrong hands can have massive repercussions for the cybersecurity industry.

According to Forbes, “Gone are the days of clumsy imagery and texts and fake sign-in pages that can be detected in an instant. These latest attacks need a technical solution.”

Users are advised to add passkeys to their accounts if available and stop using passwords when signing in to their accounts. Users should also ensure that if they use passwords, they should be long and unique, and not backed up by SMS 2-factor authentication. 

New Android Feature Detects Fake Mobile Networks

 



In a critical move for mobile security, Google is preparing to roll out a new feature in Android 16 that will help protect users from fake mobile towers, also known as cell site simulators, that can be used to spy on people without their knowledge.

These deceptive towers, often referred to as stingrays or IMSI catchers, are devices that imitate real cell towers. When a smartphone connects to them, attackers can track the user’s location or intercept sensitive data like phone calls, text messages, or even the phone's unique ID numbers (such as IMEI). What makes them dangerous is that users typically have no idea their phones are connected to a fraudulent network.

Stingrays usually exploit older 2G networks, which lack strong encryption and tower authentication. Even if a person uses a modern 4G or 5G connection, their device can still switch to 2G if the signal is stronger opening the door for such attacks.

Until now, Android users had very limited options to guard against these silent threats. The most effective method was to manually turn off 2G network support—something many people aren’t aware of or don’t know how to do.

That’s changing with Android 16. According to public documentation on the Android Open Source Project, the operating system will introduce a “network security warning” feature. When activated, it will notify users if their phone connects to a mobile network that behaves suspiciously, such as trying to extract device identifiers or downgrade the connection to an unsecured one.

This feature will be accessible through the “Mobile Network Security” settings, where users can also manage 2G-related protections. However, there's a catch: most current Android phones, including Google's own Pixel models, don’t yet have the hardware required to support this function. As a result, the feature is not yet visible in settings, and it’s expected to debut on newer devices launching later this year.

Industry observers believe that this detection system might first appear on the upcoming Pixel 10, potentially making it one of the most security-focused smartphones to date.

While stingray technology is sometimes used by law enforcement agencies for surveillance under strict regulations, its misuse remains a serious privacy concern especially if such tools fall into the wrong hands.

With Android 16, Google is taking a step toward giving users more control and awareness over the security of their mobile connections. As surveillance tactics become more advanced, these kinds of features are increasingly necessary to protect personal privacy.

WhatsApp Under Fire for AI Update Disrupting Group Communication


The new artificial intelligence capability introduced by WhatsApp aims to transform the way users interact with their conversations through sophisticated artificial intelligence. It uses advanced technology from Meta AI to provide a concise summary of unread messages across individual chats as well as group chats, which is referred to as Message Summaries. 

The tool was created to help users stay informed in increasingly active chat environments by automatically compiling key points and contextual highlights, allowing them to catch up in just a few clicks without having to scroll through lengthy message histories to catch up. The company claims all summaries are generated privately, so that confidentiality can be maintained and the process of use is as simple as possible for the user. 

WhatsApp announces its intention of integrating artificial intelligence-driven solutions into its app to improve user convenience as well as reshape communication habits for its global community with this rollout, sparking both excitement and controversy as a result. Despite being announced last month, WhatsApp’s innovative Message Summaries feature has moved from pilot testing to a full-scale rollout after successfully passing pilot testing. 

Having refined the tool and collected feedback from its users, it is now considered to be stable and has been formally launched for wider use. In the initial phase, the feature is only available to US users and is restricted to the English language at this time. This indicates that WhatsApp is cautious when it comes to deploying large-scale artificial intelligence. 

Nevertheless, the platform announced plans to extend its availability to more regions at some point in the future, along with the addition of multilingual support. The phased rollout strategy emphasises that the company is focused on ensuring that the technology is reliable and user-friendly before it is extended to the vast global market. 

It is WhatsApp's intention to focus on a controlled release so as to gather more insights about users' interaction with the AI-generated conversation summaries, as well as to fine-tune the experience before expanding internationally. As a result of WhatsApp's inability to provide an option for enabling or concealing the Message Summaries feature, there has been a significant amount of discontent among users. 

Despite the fact that Meta has refused to clarify the reason regarding the lack of an opt-out mechanism or why users were not offered the opportunity to opt out of the AI integration, they have not provided any explanation so far. As concerning as the technology itself is, the lack of transparency has been regarded equally as a cause for concern by many, raising questions about the control people have over their personal communications. As a result of these limitations, some people have attempted to circumvent the chatbot by switching to a WhatsApp Business account as a response. 

In addition, several users have commented that this strategy removed the AI functionality from Meta AI, but others have noted that the characteristic blue circle, which indicates Meta AI's presence, still appeared, which exacerbated the dissatisfaction and uncertainty. 

The Meta team hasn’t confirmed whether the business-oriented version of WhatsApp will continue to be exempt from AI integration for years to come. This rollout also represents Meta’s broader goal of integrating generative AI into all its platforms, which include Facebook and Instagram, into its ecosystem. 

Towards the end of 2024, Meta AI was introduced for the first time in Facebook Messenger in the United Kingdom, followed by a gradual extension into WhatsApp as part of a unified vision to revolutionise digital interactions. However, many users have expressed their frustration with this feature because it often feels intrusive and ultimately is useless, despite these ambitions. 

The chatbot appears to activate frequently when individuals are simply searching for past conversations or locating contacts, which results in obstructions rather than streamlining the experience. According to the initial feedback received, AI-generated responses are frequently perceived as superficial, repetitive, or even irrelevant to the conversation's context, as well as generating a wide range of perceptions of their value.

A Meta AI platform has been integrated directly into WhatsApp, unlike standalone platforms such as ChatGPT and Google Gemini, which are separately accessible by users. WhatsApp is a communication application that is used on a daily basis to communicate both personally and professionally. Because the feature was integrated without explicit consent and there were doubts about its usefulness, many users are beginning to wonder whether such pervasive AI assistance is really necessary or desirable. 

It has also been noted that there is a growing chorus of criticism about the inherent limitations of artificial intelligence in terms of reliably interpreting human communication. Many users have expressed their scepticism about AI's ability to accurately condense even one message within an active group chat, let alone synthesise hundreds of exchanges. It is not the first time Apple has faced similar challenges; Apple has faced similar challenges in the past when it had to pull an AI-powered feature that produced unintended and sometimes inaccurate summaries. 

As of today, the problem of "hallucinations," which occur in the form of factually incorrect or contextually irrelevant content generated by artificial intelligence, remains a persistent problem across nearly every generative platform, including commonly used platforms like ChatGPT. Aside from that, artificial intelligence continues to struggle with subtleties such as humour, sarcasm, and cultural nuance-aspects of natural conversation that are central to establishing a connection. 

In situations where the AI is not trained to recognise offhand or joking remarks, it can easily misinterpret those remarks. This leads to summaries that are alarmist, distorted, or completely inaccurate, as compared to human recipients' own. Due to the increased risk of misrepresentation, users who rely on WhatsApp for authentic, nuanced communication with colleagues, friends, and family are becoming more apprehensive than before. 

A philosophical objection has been raised beyond technical limitations, stating that the act of participating in a conversation is diminished by substituting real engagement for machine-generated recaps. There is a shared sentiment that the purpose of group chats lies precisely in the experience of reading and responding to the genuine voices of others while scrolling through a backlog of messages. 

However, there is a consensus that it is exhausting to scroll through such a large backlog of messages. It is believed that the introduction of Message Summaries not only threatens clear communication but also undermines the sense of personal connection that draws people into these digital communities in the first place, which is why these critics are concerned. 

In order to ensure user privacy, WhatsApp has created the Message Summaries feature using a new framework known as Private Processing, which is designed to safeguard user privacy. Meta and WhatsApp are specifically ensuring that neither the contents of their conversations nor the summaries that the AI system produces are able to be accessed by them, which is why this approach was developed. 

Instead of sending summaries to external servers, the platform is able to generate them locally on the users' devices, reinforcing its commitment to privacy. Each summary, presented in a clear bullet point format, is clearly labelled as "visible only to you," emphasising WhatsApp's privacy-centric design philosophy behind the feature as well. 

Message Summaries have shown to be especially useful in group chats in which the amount of unread messages is often overwhelming, as a result of the large volume of unread messages. With this tool, users are able to remain informed without having to read every single message, because lengthy exchanges are distilled into concise snapshots that enable them to stay updated without having to scroll through each and every individual message. 

The feature is disabled by default and needs to be activated manually, which addresses privacy concerns. Upon activating the feature, eligible chats display a discreet icon, signalling the availability of a summary without announcing it to other participants. Meta’s confidential computing infrastructure is at the core of its system, and in principle, it is comparable to Apple’s private cloud computing architecture. 

A Trusted Execution Environment (TEE) provides a foundation for Private Processing, ensuring that confidential information is handled in an effective manner, with robust measures against tampering, and clear mechanisms for ensuring transparency are in place.

A system's architecture is designed to shut down automatically or to generate verifiable evidence of the intrusion whenever any attempt is made to compromise the security assurances of the system. As well as supporting independent third-party audits, Meta has intentionally designed the framework in such a way that it will remain stateless, forward secure, and immune to targeted attacks so that Meta's claims about data protection can be verified. 

Furthermore, advanced chat privacy settings are included as a complement to these technical safeguards, as they allow users to select the conversations that will be eligible for AI-generated summaries and thus offer granular control over the use of the feature. Moreover, when a user decides to enable summaries in a chat, no notification is sent to other participants, allowing for greater discretion on the part of other participants.

There is currently a phase in which Message Summaries are being gradually introduced to users in the United States. They can only be read in English at the moment. There has been confirmation by Meta that the feature will be expanded to additional regions and supported in additional languages shortly, as part of their broader effort to integrate artificial intelligence into all aspects of their service offerings. 

As WhatsApp intensifies its efforts to embed AI capabilities deeper and deeper into everyday communication, Message Summaries marks a pivotal moment in the evolution of relationships between technology and human interaction as the company accelerates its ambition to involve AI capabilities across the entire enterprise. 

Even though the company has repeatedly reiterated that it is committed to privacy, transparency, and user autonomy, the response to this feature has been polarised, which highlights the challenges associated with incorporating artificial intelligence in spaces where trust, nuance, and human connection are paramount. 

It is a timely reminder that, for both individuals and organisations, the growth of convenience-driven automation impacts the genuine social fabric that is a hallmark of digital communities and requires a careful assessment. 

As platforms evolve, stakeholders would do well to remain vigilant with the changes to platform policies, evaluate whether such tools align with the communication values they hold dear, and consider offering structured feedback in order for these technologies to mature with maturity. As artificial intelligence continues to redefine the contours of messaging, users will need to be open to innovation while also expressing critical thought about the long-term implications on privacy, comprehension, and even the very nature of meaningful dialogue as AI use continues to grow in popularity.

OpenAI Rolls Out Premium Data Connections for ChatGPT Users


The ChatGPT solution has become a transformative artificial intelligence solution widely adopted by individuals and businesses alike seeking to improve their operations. Developed by OpenAI, this sophisticated artificial intelligence platform has been proven to be very effective in assisting users with drafting compelling emails, developing creative content, or conducting complex data analysis by streamlining a wide range of workflows. 

OpenAI is continuously enhancing ChatGPT's capabilities through new integrations and advanced features that make it easier to integrate into the daily workflows of an organisation; however, an understanding of the platform's pricing models is vital for any organisation that aims to use it efficiently on a day-to-day basis. A business or an entrepreneur in the United Kingdom that is considering ChatGPT's subscription options may find that managing international payments can be an additional challenge, especially when the exchange rate fluctuates or conversion fees are hidden.

In this context, the Wise Business multi-currency credit card offers a practical solution for maintaining financial control as well as maintaining cost transparency. This payment tool, which provides companies with the ability to hold and spend in more than 40 currencies, enables them to settle subscription payments without incurring excessive currency conversion charges, which makes it easier for them to manage budgets as well as adopt cutting-edge technology. 

A suite of premium features has been recently introduced by OpenAI that aims to enhance the ChatGPT experience for subscribers by enhancing its premium features. There is now an option available to paid users to use advanced reasoning models that include O1 and O3, which allow users to make more sophisticated analytical and problem-solving decisions. 

The subscription comes with more than just enhanced reasoning; it also includes an upgraded voice mode that makes conversational interactions more natural, as well as improved memory capabilities that allow the AI to retain context over the course of a long period of time. It has also been enhanced with the addition of a powerful coding assistant designed to help developers automate workflows and speed up the software development process. 

To expand the creative possibilities even further, OpenAI has adjusted token limits, which allow for greater amounts of input and output text and allow users to generate more images without interruption. In addition to expedited image generation via a priority queue, subscribers have the option of achieving faster turnaround times during high-demand periods. 

In addition to maintaining full access to the latest models, paid accounts are also provided with consistent performance, as they are not forced to switch to less advanced models when server capacity gets strained-a limitation that free users may still have to deal with. While OpenAI has put in a lot of effort into enriching the paid version of the platform, the free users have not been left out. GPT-4o has effectively replaced the older GPT-4 model, allowing complimentary accounts to take advantage of more capable technology without having to fall back to a fallback downgrade. 

In addition to basic imaging tools, free users will also receive the same priority in generation queues as paid users, although they will also have access to basic imaging tools. With its dedication to making AI broadly accessible, OpenAI has made additional features such as ChatGPT Search, integrated shopping assistance, and limited memory available free of charge, reflecting its commitment to making AI accessible to the public. 

ChatGPT's free version continues to be a compelling option for people who utilise the software only sporadically-perhaps to write occasional emails, research occasionally, and create simple images. In addition, individuals or organisations who frequently run into usage limits, such as waiting for long periods of time for token resettings, may find that upgrading to a paid plan is an extremely beneficial decision, as it unlocks uninterrupted access as well as advanced capabilities. 

In order to transform ChatGPT into a more versatile and deeply integrated virtual assistant, OpenAI has introduced a new feature, called Connectors, which is designed to transform the platform into an even more seamless virtual assistant. It has been enabled by this new feature for ChatGPT to seamlessly interface with a variety of external applications and data sources, allowing the AI to retrieve and synthesise information from external sources in real time while responding to user queries. 

With the introduction of Connectors, the company is moving forward towards providing a more personal and contextually relevant experience for our users. In the case of an upcoming family vacation, for example, ChatGPT can be instructed by users to scan their Gmail accounts in order to compile all correspondence regarding the trip. This allows users to streamline travel plans rather than having to go through emails manually. 

With its level of integration, Gemini is similar to its rivals, which enjoy advantages from Google's ownership of a variety of popular services such as Gmail and Calendar. As a result of Connectors, individuals and businesses will be able to redefine how they engage with AI tools in a new way. OpenAI intends to create a comprehensive digital assistant by giving ChatGPT secure access to personal or organisational data that is residing across multiple services, by creating an integrated digital assistant that anticipates needs, surfaces critical insights, streamlines decision-making processes, and provides insights. 

There is an increased demand for highly customised and intelligent assistance, which is why other AI developers are likely to pursue similar integrations to remain competitive. The strategy behind Connectors is ultimately to position ChatGPT as a central hub for productivity — an artificial intelligence that is capable of understanding, organising, and acting upon every aspect of a user’s digital life. 

In spite of the convenience and efficiency associated with this approach, it also illustrates the need to ensure that personal information remains protected while providing robust data security and transparency in order for users to take advantage of these powerful integrations as they become mainstream. In its official X (formerly Twitter) account, OpenAI has recently announced the availability of Connectors that can integrate with Google Drive, Dropbox, SharePoint, and Box as part of ChatGPT outside of the Deep Research environment. 

As part of this expansion, users will be able to link their cloud storage accounts directly to ChatGPT, enabling the AI to retrieve and process their personal and professional data, enabling it to create responses on their own. As stated by OpenAI in their announcement, this functionality is "perfect for adding your own context to your ChatGPT during your daily work," highlighting the company's ambition of making ChatGPT more intelligent and contextually aware. 

It is important to note, however, that access to these newly released Connectors is confined to specific subscriptions and geographical restrictions. A ChatGPT Pro subscription, which costs $200 per month, is exclusive to ChatGPT Pro subscribers only and is currently available worldwide, except for the European Economic Area (EEA), Switzerland and the United Kingdom. Consequently, users whose plans are lower-tier, such as ChatGPT Plus subscribers paying $20 per month, or who live in Europe, cannot use these integrations at this time. 

Typically, the staggered rollout of new technologies is a reflection of broader challenges associated with regulatory compliance within the EU, where stricter data protection regulations as well as artificial intelligence governance frameworks often delay their availability. Deep Research remains relatively limited in terms of the Connectors available outside the company. However, Deep Research provides the same extensive integration support as Deep Research does. 

In the ChatGPT Plus and Pro packages, users leveraging Deep Research capabilities can access a much broader array of integrations — for example, Outlook, Teams, Gmail, Google Drive, and Linear — but there are some restrictions on regions as well. Additionally, organisations with Team plans, Enterprise plans, or Educational plans have access to additional Deep Research features, including SharePoint, Dropbox, and Box, which are available to them as part of their Deep Research features. 

Additionally, OpenAI is now offering the Model Context Protocol (MCP), a framework which allows workspace administrators to create customised Connectors based on their needs. By integrating ChatGPT with proprietary data systems, organizations can create secure, tailored integrations, enabling highly specialized use cases for internal workflows and knowledge management that are highly specialized. 

With the increasing adoption of artificial intelligence solutions by companies, it is anticipated that the catalogue of Connectors will rapidly expand, offering users the option of incorporating external data sources into their conversations. The dynamic nature of this market underscores that technology giants like Google have the advantage over their competitors, as their AI assistants, such as Gemini, can be seamlessly integrated throughout all of their services, including the search engine. 

The OpenAI strategy, on the other hand, relies heavily on building a network of third-party integrations to create a similar assistant experience for its users. It is now generally possible to access the new Connectors in the ChatGPT interface, although users will have to refresh their browsers or update the app in order to activate the new features. 

As AI-powered productivity tools continue to become more widely adopted, the continued growth and refinement of these integrations will likely play a central role in defining the future of AI-powered productivity tools. A strategic approach is recommended for organisations and professionals evaluating ChatGPT as generative AI capabilities continue to mature, as it will help them weigh the advantages and drawbacks of deeper integration against operational needs, budget limitations, and regulatory considerations that will likely affect their decisions.

As a result of the introduction of Connectors and the advanced subscription tiers, people are clearly on a trajectory toward more personalised and dynamic AI assistance, which is able to ingest and contextualise diverse data sources. As a result of this evolution, it is also becoming increasingly important to establish strong frameworks for data governance, to establish clear controls for access to the data, and to ensure adherence to privacy regulations.

If companies intend to stay competitive in an increasingly automated landscape by investing early in these capabilities, they can be in a better position to utilise the potential of AI and set clear policies that balance innovation with accountability by leveraging the efficiencies of AI in the process. In the future, the organisations that are actively developing internal expertise, testing carefully selected integrations, and cultivating a culture of responsible AI usage will be the most prepared to fully realise the potential of artificial intelligence and to maintain a competitive edge for years to come.

Russian Threat Actors Circumvent Gmail Security with App Password Theft


 

As part of Google's Threat Intelligence Group (GTIG), security researchers discovered a highly sophisticated cyber-espionage campaign orchestrated by Russian threat actors. They succeeded in circumventing Google's multi-factor authentication (MFA) protections for Gmail accounts by successfully circumventing it. 

A group of researchers found that the attackers used highly targeted and convincing social engineering tactics by impersonating Department of State officials in order to establish trust with their victims in the process. As soon as a rapport had been built, the perpetrators manipulated their victims into creating app-specific passwords. 

These passwords are unique 16-character codes created by Google which enable secure access to certain applications and devices when two-factor authentication is enabled. As a result of using these app passwords, which bypass conventional two-factor authentication, the attackers were able to gain persistent access to sensitive emails through Gmail accounts undetected. 

It is clear from this operation that state-sponsored cyber actors are becoming increasingly inventive, and there is also a persistent risk posed by seemingly secure mechanisms for recovering and accessing accounts. According to Google, this activity was carried out by a threat cluster designated UNC6293, which is closely related to the Russian hacking group known as APT29. It is believed that UNC6293 has been closely linked to APT29, a state-sponsored hacker collective. 

APT29 has garnered attention as one of the most sophisticated and sophisticated Advanced Persistent Threat (APT) groups sponsored by the Russian government, and according to intelligence analysts, that group is an extension of the Russian Foreign Intelligence Service (SVR). It is important to note that over the past decade this clandestine collective has orchestrated a number of high-profile cyber-espionage campaigns targeting strategic entities like the U.S. government, NATO member organizations, and prominent research institutes all over the world, including the U.S. government, NATO, and a wide range of academic institutions. 

APT29's operators have a reputation for carrying out prolonged infiltration operations that can remain undetected for extended periods of time, characterised by their focus on stealth and persistence. The tradecraft of their hackers is consistently based on refined social engineering techniques that enable them to blend into legitimate communications and exploit the trust of their intended targets through their tradecraft. 

By crafting highly convincing narratives and gradually manipulating individuals into compromising security controls in a step-by-step manner, APT29 has demonstrated that it has the ability to bypass even highly sophisticated technical defence systems. This combination of patience, technical expertise, and psychological manipulation has earned the group a reputation as one of the most formidable cyber-espionage threats associated with Russian state interests. 

A multitude of names are used by this prolific group in the cybersecurity community, including BlueBravo, Cloaked Ursa, Cosy Bear, CozyLarch, ICECAP, Midnight Blizzard, and The Dukes. In contrast to conventional phishing campaigns, which are based on a sense of urgency or intimidation designed to elicit a quick response, this campaign unfolded in a methodical manner over several weeks. 

There was a deliberate approach by the attackers, slowly creating a sense of trust and familiarity with their intended targets. To make their deception more convincing, they distributed phishing emails, which appeared to be official meeting invitations that they crafted. Often, these messages were carefully constructed to appear authentic and often included the “@state.gov” domain as the CC field for at least four fabricated email addresses. 

The aim of this tactic was to create a sense of legitimacy around the communication and reduce the likelihood that the recipients would scrutinise it, which in turn increased the chances of the communication being exploited effectively. It has been confirmed that the British writer, Keir Giles, a senior consulting fellow at Chatham House, a renowned global affairs think tank, was a victim of this sophisticated campaign. 

A report indicates Giles was involved in a lengthy email correspondence with a person who claimed to be Claudia S Weber, who represented the U.S. Department of State, according to reports. More than ten carefully crafted messages were sent over several weeks, deliberately timed to coincide with Washington's standard business hours. Over time, the attacker gradually gained credibility and trust among the people who sent the messages. 

It is worth noting that the emails were sent from legitimate addresses, which were configured so that no delivery errors would occur, which further strengthened the ruse. When this trust was firmly established, the adversary escalated the scheme by sending a six-page PDF document with a cover letter resembling an official State Department letterhead that appeared to be an official State Department document. 

As a result of the instructions provided in the document, the target was instructed to access Google's account settings page, to create a 16-character app-specific password labelled "ms.state.gov, and to return the code via email under the guise of completing secure onboarding. As a result of the app password, the threat actors ended up gaining sustained access to the victim's Gmail account, bypassing multi-factor authentication altogether as they were able to access their accounts regularly. 

As the Citizen Lab experts were reviewing the emails and PDF at Giles' request, they noted that the emails and PDF were free from subtle language inconsistencies and grammatical errors that are often associated with fraudulent communications. In fact, based on the precision of the language, researchers have suspected that advanced generative AI tools have been deployed to craft polished, credible content for the purpose of evading scrutiny and enhancing the overall effectiveness of the deception as well. 

There was a well-planned, incremental strategy behind the attack campaign that was specifically geared towards increasing the likelihood that the targeted targets would cooperate willingly. As one documented instance illustrates, the threat actor tried to entice a leading academic expert to participate in a private online discussion under the pretext of joining a secure State Department forum to obtain his consent.

In order to enable guest access to Google's platform, the victim was instructed to create an app-specific password using Google's account settings. In fact, the attacker used this credential to gain access to the victim's Gmail account with complete control over all multi-factor authentication procedures, enabling them to effectively circumvent all of the measures in place. 

According to security researchers, the phishing outreach was carefully crafted to look like a routine, legitimate onboarding process, thus making it more convincing. In addition to the widespread trust that many Americans place in official communications issued by U.S. government institutions, the attackers exploited the general lack of awareness of the dangers of app-specific passwords, as well as their widespread reliance on official communications. 

A narrative of official protocol, woven together with professional-sounding language, was a powerful way of making the perpetrators more credible and decreasing the possibility of the target questioning their authenticity in their request. According to cybersecurity experts, several individuals who are at higher risk from this campaign - journalists, policymakers, academics, and researchers - should enrol in Google's Advanced Protection Program (APP). 

A major component of this initiative is the restriction of access to only verified applications and devices, which offers enhanced safeguards. The experts also advise organisations that whenever possible, they should disable the use of app-specific passwords and set up robust internal policies that require any unusual or sensitive requests to be verified, especially those originating from reputable institutions or government entities, as well as implement robust internal policies requiring these types of requests. 

The intensification of training for personnel most vulnerable to these prolonged social engineering attacks, coupled with the implementation of clear, secure channels for communication between the organisation and its staff, would help prevent the occurrence of similar breaches in the future. As a result of this incident, it serves as an excellent reminder that even mature security ecosystems remain vulnerable to a determined adversary combining psychological manipulation with technical subterfuge when attempting to harm them. 

With threat actors continually refining their methods, organisations and individuals must recognise that robust cybersecurity is much more than merely a set of tools or policies. In order to combat cyberattacks as effectively as possible, it is essential to cultivate a culture of vigilance, scepticism, and continuous education. In particular, professionals who routinely take part in sensitive research, diplomatic relations, or public relations should assume they are high-value targets and adopt a proactive defence posture. 

Consequently, any unsolicited instructions must be verified by a separate, trusted channel, hardware security keys should be used to supplement authentication, and account settings should be reviewed regularly for unauthorised changes. For their part, institutions should ensure that security protocols are both accessible and clearly communicated as they are technically sound by investing in advanced threat intelligence, simulating sophisticated phishing scenarios, and investing in advanced threat intelligence. 

Fundamentally, resilience against state-sponsored cyber-espionage is determined by the ability to plan in advance not only how adversaries are going to deploy their tactics, but also the trust they will exploit in order to reach their goals.