
Russian Threat Actors Circumvent Gmail Security with App Password Theft


 

Security researchers in Google's Threat Intelligence Group (GTIG) have uncovered a highly sophisticated cyber-espionage campaign orchestrated by Russian threat actors who succeeded in circumventing Google's multi-factor authentication (MFA) protections for Gmail accounts. 

The researchers found that the attackers relied on highly targeted and convincing social engineering, impersonating U.S. Department of State officials in order to establish trust with their victims. Once a rapport had been built, the perpetrators manipulated their victims into creating app-specific passwords. 

App passwords are unique 16-character codes generated by Google that allow older applications and devices to sign in to an account even when two-factor authentication is enabled. Because an app password bypasses the interactive second factor entirely, the attackers were able to gain persistent, undetected access to the victims' Gmail accounts and the sensitive email they contained. 
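To make concrete why a stolen app password is so valuable, the short Python sketch below shows that a plain IMAP login with such a code completes without any interactive MFA challenge. The account address and 16-character code are placeholders, and the snippet assumes IMAP access is enabled on the account; it illustrates the mechanism only and is not a description of the attackers' tooling.

```python
# Illustrative sketch only: an app password acts as a full credential for
# legacy protocols such as IMAP, so no second factor is requested at login.
import imaplib

ACCOUNT = "victim@example.com"          # placeholder address
APP_PASSWORD = "abcd efgh ijkl mnop"    # placeholder 16-character app password

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login(ACCOUNT, APP_PASSWORD.replace(" ", ""))  # no MFA prompt occurs here
imap.select("INBOX", readonly=True)
typ, data = imap.search(None, "ALL")
print(f"Mailbox readable: {len(data[0].split())} messages visible")
imap.logout()
```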

The operation shows how inventive state-sponsored cyber actors have become, and it underlines the persistent risk posed by seemingly secure account-recovery and access mechanisms. According to Google, the activity was carried out by a threat cluster designated UNC6293, which is assessed to be closely linked to the Russian state-sponsored hacking group APT29. 

APT29 is regarded as one of the most sophisticated Advanced Persistent Threat (APT) groups sponsored by the Russian government, and intelligence analysts consider it an extension of the Russian Foreign Intelligence Service (SVR). Over the past decade, this clandestine collective has orchestrated a number of high-profile cyber-espionage campaigns against strategic targets around the world, including the U.S. government, NATO member organisations, and prominent research and academic institutions. 

APT29's operators have a reputation for prolonged infiltration operations that can remain undetected for extended periods, characterised by a focus on stealth and persistence. Their tradecraft consistently relies on refined social engineering techniques that allow them to blend into legitimate communications and exploit the trust of their intended targets. 

By crafting highly convincing narratives and manipulating individuals into compromising security controls step by step, APT29 has demonstrated its ability to bypass even sophisticated technical defences. This combination of patience, technical expertise, and psychological manipulation has earned the group a reputation as one of the most formidable cyber-espionage threats associated with Russian state interests. 

The prolific group is tracked under a multitude of names in the cybersecurity community, including BlueBravo, Cloaked Ursa, Cozy Bear, CozyLarch, ICECAP, Midnight Blizzard, and The Dukes. In contrast to conventional phishing campaigns, which rely on urgency or intimidation to elicit a quick response, this campaign unfolded methodically over several weeks. 

The attackers took a deliberate approach, slowly building a sense of trust and familiarity with their intended targets. To make the deception more convincing, they distributed phishing emails crafted to look like official meeting invitations. These messages were carefully constructed to appear authentic and typically CC'd at least four fabricated email addresses on the "@state.gov" domain. 

The aim of this tactic was to lend the communication a sense of legitimacy and reduce the likelihood that recipients would scrutinise it, increasing the chances that the lure would succeed. It has been confirmed that the British writer Keir Giles, a senior consulting fellow at Chatham House, the renowned global affairs think tank, was a victim of the campaign. 

Reports indicate that Giles was drawn into a lengthy email correspondence with a person claiming to be Claudia S. Weber of the U.S. Department of State. More than ten carefully crafted messages were sent over several weeks, deliberately timed to coincide with Washington's standard business hours, and over time the attacker gradually built credibility and trust with the target. 

Notably, the emails were sent from legitimate-looking addresses configured so that no delivery errors would occur, which further strengthened the ruse. Once trust was firmly established, the adversary escalated the scheme by sending a six-page PDF, styled on official State Department letterhead, that appeared to be a genuine government document. 

The document instructed the target to open Google's account settings page, create a 16-character app-specific password labelled "ms.state.gov", and return the code via email under the guise of completing secure onboarding. With that app password in hand, the threat actors gained sustained access to the victim's Gmail account, bypassing multi-factor authentication altogether. 

Citizen Lab experts who reviewed the emails and PDF at Giles' request noted that they were free of the subtle language inconsistencies and grammatical errors often associated with fraudulent communications. The precision of the language led researchers to suspect that advanced generative AI tools were used to craft polished, credible content designed to evade scrutiny and enhance the effectiveness of the deception. 

The campaign followed a well-planned, incremental strategy specifically geared towards increasing the likelihood that targets would cooperate willingly. In one documented instance, the threat actor tried to entice a leading academic expert to participate in a private online discussion under the pretext of joining a secure State Department forum.

To supposedly enable guest access to the platform, the victim was instructed to create an app-specific password in Google's account settings. The attacker then used this credential to access the victim's Gmail account, effectively circumventing every multi-factor authentication measure in place. 

According to security researchers, the phishing outreach was carefully crafted to look like a routine, legitimate onboarding process. The attackers exploited both the widespread trust placed in official communications from U.S. government institutions and the general lack of awareness of the dangers of app-specific passwords. 

A narrative of official protocol, woven together with professional-sounding language, made the perpetrators more credible and reduced the chance that targets would question the authenticity of the request. Cybersecurity experts advise that individuals at higher risk from this kind of campaign - journalists, policymakers, academics, and researchers - should enrol in Google's Advanced Protection Program (APP). 

A major component of that programme is restricting account access to verified applications and devices, which offers enhanced safeguards. Experts also advise organisations to disable app-specific passwords wherever possible and to establish robust internal policies requiring that any unusual or sensitive request be verified, especially requests that appear to originate from reputable institutions or government entities. 

Intensified training for the personnel most exposed to prolonged social engineering, coupled with clear, secure channels for internal communication, would help prevent similar breaches in the future. The incident is a stark reminder that even mature security ecosystems remain vulnerable to a determined adversary who combines psychological manipulation with technical subterfuge. 

With threat actors continually refining their methods, organisations and individuals must recognise that robust cybersecurity is much more than merely a set of tools or policies. In order to combat cyberattacks as effectively as possible, it is essential to cultivate a culture of vigilance, scepticism, and continuous education. In particular, professionals who routinely take part in sensitive research, diplomatic relations, or public relations should assume they are high-value targets and adopt a proactive defence posture. 

Any unsolicited instructions should be verified through a separate, trusted channel, hardware security keys should be used to strengthen authentication, and account settings should be reviewed regularly for unauthorised changes. For their part, institutions should invest in advanced threat intelligence, simulate sophisticated phishing scenarios, and ensure that security protocols are as accessible and clearly communicated as they are technically sound. 

Fundamentally, resilience against state-sponsored cyber-espionage depends on anticipating not only the tactics adversaries will deploy, but also the trust they will exploit to reach their goals.

AI Integration Raises Alarms Over Enterprise Data Safety

 


Today's digital landscape has become increasingly interconnected, and cyber threats have grown in sophistication, significantly weakening the effectiveness of traditional security protocols. As enterprises generate and store ever-larger volumes of sensitive, mission-critical data, cybercriminals have evolved their tactics to exploit emerging vulnerabilities, launch highly targeted attacks, and use advanced techniques to breach security perimeters.

In light of this rapidly evolving threat environment, organisations are increasingly forced to adopt more adaptive and intelligent security solutions in addition to conventional defences. In the field of cybersecurity, artificial intelligence (AI) has emerged as a significant force, particularly in the area of data protection. 

AI-powered data security frameworks are transforming the way threats are detected, analysed, and mitigated in real time. These systems enhance visibility across complex IT ecosystems, automate threat detection, and support rapid response by identifying patterns and anomalies that might go unnoticed by human analysts.

AI-driven systems also allow organisations to develop risk mitigation strategies that are scalable and aligned with their business objectives. Beyond threat prevention, the integration of artificial intelligence plays a crucial role in maintaining regulatory compliance in an era of increasingly stringent data protection laws. 

By continuously monitoring and assessing cybersecurity posture, artificial intelligence helps businesses uphold industry standards, minimise operational interruptions, and strengthen stakeholder confidence. As the threat landscape continues to evolve, modern enterprises need to recognise that AI-enabled data security is no longer a strategic advantage but a fundamental requirement for safeguarding digital assets. 

Varonis recently revealed that 99% of organisations have sensitive data exposed to artificial intelligence systems, a striking finding that underscores the importance of data-centric security at a time when AI tools are being adopted rapidly across business operations. Its report, The State of Data Security: Quantifying Artificial Intelligence's Impact on Data Risk, presents an in-depth analysis of how misconfigured settings, excessive access rights, and neglected security gaps are leaving critical enterprise data vulnerable to AI-driven exploitation. 

An important characteristic of the report is that it relies on extensive empirical analysis rather than opinion surveys. To evaluate data risk across 1,000 organisations, Varonis analysed a wide range of cloud environments spanning more than 10 billion cloud assets and over 20 petabytes of sensitive data. 

The platforms analysed included Amazon Web Services, Google Cloud, Microsoft Azure, Microsoft 365, Salesforce, Snowflake, Okta, Databricks, Slack, Zoom, and Box, giving a broad and realistic picture of enterprise data exposure in the age of artificial intelligence. Yaki Faitelson, CEO, President, and Co-Founder of Varonis, stressed the importance of balancing innovation with risk, noting that while AI's productivity gains are undeniable, it also poses serious security challenges. 

With CIOs and CISOs under growing pressure to adopt AI technologies rapidly, demand for advanced data security platforms is rising. A proactive, data-centric approach to cybersecurity is essential to prevent AI from becoming a gateway to large-scale data breaches, Faitelson says. The researchers also examine two critical dimensions of AI-driven data exposure relating to large language models (LLMs) and AI copilots: human-to-machine interaction and machine-to-machine integrity. 

A key focus of the study is how sensitive data, such as employee compensation details, intellectual property, proprietary software, and confidential research and development insights, can be unintentionally accessed, leaked, or misused through a single prompt to an AI interface if it is not properly protected. As AI assistants are adopted across departments, the risk of inadvertently disclosing critical business information has increased considerably. 

The second category of risk concerns the integrity and trustworthiness of the data used to train or enhance AI systems. Machine-to-machine vulnerabilities commonly arise when flawed, biased, or deliberately manipulated datasets are introduced into the learning cycle of machine learning models. 

Such corrupted data can have far-reaching and potentially dangerous consequences. Inaccurate or falsified clinical information could compromise the development of life-saving medical treatments, while malicious actors may embed harmful code within AI training pipelines, introducing backdoors or vulnerabilities into applications that are not immediately detected. 

This dual-risk framing emphasises the importance of tackling AI security holistically, taking into account the entire data lifecycle, from acquisition and input to training and deployment, rather than relying on user-level controls alone. By considering both human-induced and systemic risks associated with generative AI tools, organisations can implement more resilient safeguards for their most valuable data assets. 

Organisations should look beyond conventional governance models to secure sensitive data in the age of AI. In an environment where AI systems require dynamic, expansive access to vast datasets, traditional approaches to data protection - often rooted in static policies and role-based access - are no longer sufficient. 

AI-ready security requires striking a critical balance between robust protection against misuse, leakage, and regulatory non-compliance on one hand, and enabling data access for innovation on the other. To meet these challenges, organisations need a multilayered, forward-thinking security strategy tailored to AI ecosystems. 

Key components include data tagging and classification - identifying and categorising sensitive information so that it can be handled according to its criticality - and, in place of role-based access control (RBAC), attribute-based access control (ABAC), which allows more granular access policies based on the user's identity, the request context, and the sensitivity of the data. 
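As a hedged illustration of how an attribute-based check differs from a simple role lookup, the Python sketch below combines a user's clearance, the resource's classification label, and the request context in a single decision function. The attribute names and the rule that restricted data is never surfaced through an AI copilot are illustrative assumptions, not features of any particular product.

```python
# Minimal ABAC sketch: the decision depends on attributes of the user, the
# resource, and the request context rather than on a role name alone.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    department: str
    clearance: int            # 1 = public ... 4 = restricted

@dataclass
class Resource:
    name: str
    classification: int       # sensitivity label assigned during data tagging
    owner_department: str

def can_read(user: User, resource: Resource, *, via_ai_copilot: bool) -> bool:
    """Grant access only when every attribute-based condition holds."""
    if user.clearance < resource.classification:
        return False
    if user.department != resource.owner_department:
        return False
    # Illustrative context rule: the most sensitive data is never exposed to AI assistants.
    if via_ai_copilot and resource.classification >= 4:
        return False
    return True

alice = User("alice", "finance", clearance=3)
payroll = Resource("payroll-2025", classification=3, owner_department="finance")
print(can_read(alice, payroll, via_ai_copilot=False))  # True
print(can_read(alice, payroll, via_ai_copilot=True))   # True (classification below 4)
```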

Organisations also need to design AI-aware data pipelines with proactive security checkpoints that monitor how data is used by AI tools. Output validation becomes equally crucial: mechanisms that check AI-generated outputs for compliance, accuracy, and potential risk before they are circulated internally or externally. 
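One minimal form such a checkpoint can take is a pattern scan over generated text before release. The Python sketch below is a hedged example under that assumption; the regular expressions and the block-on-any-match rule are illustrative, and a production validator would typically combine classification labels, policy engines, and human review.

```python
# Hedged sketch of an output-validation checkpoint: block AI-generated text
# that appears to contain sensitive identifiers before it is circulated.
import re

SENSITIVE_PATTERNS = {
    "us_ssn":       re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key_id":   re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal_tag": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def validate_output(text: str) -> tuple[bool, list[str]]:
    """Return (releasable, names of rules that matched)."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    return (not hits, hits)

releasable, findings = validate_output("Summary: employee 123-45-6789 earns ...")
if not releasable:
    print("Blocked before circulation:", findings)
```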

The complexity of this landscape is compounded by the rise of global and regional regulations governing data protection and artificial intelligence. In addition to general data privacy frameworks such as the GDPR and CCPA, businesses now need to prepare for emerging AI-specific regulations that place a stronger emphasis on how AI systems access and process sensitive data. This regulatory evolution requires organisations to maintain a security posture that is both agile and anticipatory.

Matillion Data Productivity Cloud, for instance, is a solution that embodies this "secure by design" principle: a hybrid cloud SaaS platform tailored to the security requirements of enterprise environments. 

With standardised encryption and authentication protocols, the platform integrates easily into enterprise networks over a secure cloud infrastructure. It is built around a pushdown architecture that keeps customer data inside the organisation's own cloud environment while still allowing advanced orchestration of complex data workflows, minimising the risk of data exposure.

Rather than moving data, Matillion focuses on metadata management and workflow automation, giving organisations secure, efficient data operations and faster insights with stronger data integrity and compliance. As AI exerts this dual pressure, organisations must embrace a paradigm shift in which security is woven into the fabric of the entire data lifecycle. 

A shift from traditional governance systems to more adaptive, intelligent frameworks will help secure data in the AI era. Because AI systems require broad access to enterprise data, organisations must strike a balance between openness and security: tag and classify data, implement attribute-based access controls for precise management of access, build AI-aware data pipelines with security checks, and validate outputs to prevent the distribution of risky or non-compliant AI-generated results. 

With the rise of global and AI-specific regulations, companies need compliance strategies that are built to last. Matillion Data Productivity Cloud is one example of a secure-by-design platform, combining a hybrid SaaS architecture with enterprise-grade security controls. 

Through pushdown processing, customer data stays within the organisation's cloud environment while workflows are orchestrated safely and efficiently, allowing organisations to use AI confidently without sacrificing data security or regulatory compliance. As artificial intelligence and enterprise data security continue to evolve rapidly, organisations need a future-oriented mindset that emphasises agility, responsibility, and innovation. 

Reactive cybersecurity is no longer enough; businesses must embrace AI-literate governance models, advanced threat intelligence capabilities, and infrastructures designed with security in mind. Data security must be embedded into every phase of the data lifecycle, from creation and classification to access, analysis, and AI-driven transformation. Leadership teams must foster a culture of continuous risk evaluation, and IT and data teams must be empowered to collaborate proactively with compliance, legal, and business units. 

In order to maintain trust and accountability, it will be imperative to implement clear policies regarding AI usage, ensure traceability in data workflows, and establish real-time auditability. Further, with the maturation of AI regulations and the increasing demands for compliance across a variety of sectors, forward-looking organisations should begin aligning their operational standards with global best practices rather than waiting for mandatory regulations to be passed. 

Data is the foundation of artificial intelligence, and protecting that foundation is a strategic imperative as well as a technical obligation. By emphasising resilient, ethical, and intelligent data security, today's companies will not only mitigate risk but also position themselves to reap the full potential of AI tomorrow.

Rust-Developed InfoStealer Extracts Sensitive Data from Chromium-Based Browsers


Browsers at risk

A new information-stealing malware, written in the Rust programming language, has surfaced as a major danger to users of Chromium-based browsers such as Google Chrome, Microsoft Edge, and others. 

Dubbed “RustStealer” by cybersecurity experts, this advanced malware is designed to steal sensitive data, including login credentials, cookies, and browsing history, from infected systems. 

Evolution of Rust language

The growing use of Rust, a language known for memory safety and performance, signals a shift toward more resilient and harder-to-detect threats, as Rust binaries often evade traditional antivirus solutions due to their compiled nature and their relative rarity in the malware landscape. 

RustStealer operates with a high degree of stealth, using sophisticated obfuscation techniques to evade endpoint security tools. Initial infection vectors point to phishing campaigns in which malicious attachments or links in seemingly genuine emails trick users into downloading the payload. 

Once executed, the malware establishes persistence via registry modifications or scheduled tasks, ensuring it remains active even after the system reboots. 
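For defenders, one quick way to check for this kind of persistence is to review the auto-run registry locations that commodity stealers typically abuse. The Windows-only Python sketch below lists entries under the common Run keys; it is a hedged, illustrative audit helper rather than a detection rule for RustStealer specifically, and scheduled tasks would need a separate check.

```python
# Hedged, Windows-only sketch: enumerate auto-run registry entries so an
# analyst can spot unexpected executables configured to start at logon.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_run_entries() -> None:
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key missing or not readable
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            print(f"{path}\\{name} -> {value}")
            index += 1
        winreg.CloseKey(key)

if __name__ == "__main__":
    list_run_entries()
```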

Distribution Mechanisms

Its main focus is Chromium-based browsers, abusing the accessibility of unencrypted information stored in browser profiles to harvest session tokens, usernames, and passwords. 

In addition, RustStealer has been found to exfiltrate data to remote command-and-control (C2) servers via encrypted communication channels, making detection by network monitoring tools such as Wireshark more challenging.

Experts have also observed its ability to target cryptocurrency wallet extensions, putting users who manage digital assets through browser plugins at risk. This multi-faceted approach highlights the malware’s goal of maximising data theft while reducing the chances of early detection, an approach reminiscent of advanced persistent threats (APTs).

About RustStealer malware

What sets RustStealer apart is its modular design, which lets operators rework or extend its capabilities remotely.

This adaptability suggests that future variants could integrate functionality such as ransomware components or keylogging, intensifying the threat over the longer run. 

The use of Rust also complicates reverse-engineering, as its compiled output is harder to decompile than scripts written in Python or other languages commonly seen in older malware strains. 

Businesses are advised to remain vigilant by deploying strong anti-phishing protections, keeping browser software up to date, and using endpoint detection and response (EDR) solutions to flag suspicious behavior. 

APT41 Exploits Google Calendar in Stealthy Cyberattack; Google Shuts It Down

 

Chinese state-backed threat actor APT41 has been discovered leveraging Google Calendar as a command-and-control (C2) channel in a sophisticated cyber campaign, according to Google’s Threat Intelligence Group (GTIG). The team has since dismantled the infrastructure and implemented defenses to block similar future exploits.

The campaign began with a previously breached government website — though GTIG didn’t disclose how it was compromised — which hosted a ZIP archive. This file was distributed to targets via phishing emails.

Once downloaded, the archive revealed three components: an executable and a dynamic-link library (DLL), both disguised as image files, and a Windows shortcut (LNK) masquerading as a PDF. When users attempted to open the phony PDF, the shortcut activated the DLL, which then decrypted and launched a third file containing the actual malware, dubbed ToughProgress.
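A simple defensive counterpart to this masquerading trick is to compare each extracted file's leading bytes with what its extension claims. The Python sketch below flags files that carry an image or PDF extension but start with the Windows PE "MZ" header; the folder name and extension list are illustrative assumptions.

```python
# Hedged sketch: flag files whose extension suggests an image or PDF but whose
# contents begin with a Windows PE header, as with the disguised EXE and DLL.
from pathlib import Path

BENIGN_LOOKING_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".pdf"}

def audit(directory: str) -> None:
    for path in Path(directory).rglob("*"):
        if path.is_file() and path.suffix.lower() in BENIGN_LOOKING_EXTS:
            head = path.read_bytes()[:2]
            if head == b"MZ":  # PE executables and DLLs start with "MZ"
                print(f"Suspicious: {path} has a document extension but a PE header")

audit("extracted_archive")  # hypothetical folder holding the unpacked ZIP contents
```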

Upon execution, ToughProgress connected to Google Calendar to retrieve its instructions, embedded within event descriptions or hidden calendar events. The malware then exfiltrated stolen data by creating a zero-minute calendar event on May 30, embedding the encrypted information within the event's description field.
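The same technique suggests a defensive audit: look for zero-duration events whose descriptions read like opaque encoded blobs. The Python sketch below uses the Google Calendar API through google-api-python-client under that assumption; the duration and length heuristics are illustrative rather than GTIG's published detection logic, and `creds` is an OAuth credential object obtained separately.

```python
# Hedged sketch: surface zero-duration calendar events carrying long,
# base64-looking descriptions, a pattern consistent with calendar-based C2.
import re
from googleapiclient.discovery import build

BLOB_RE = re.compile(r"[A-Za-z0-9+/=\s]+")

def suspicious_events(creds, calendar_id="primary", min_blob_len=512):
    service = build("calendar", "v3", credentials=creds)
    page_token = None
    while True:
        resp = service.events().list(calendarId=calendar_id,
                                      pageToken=page_token,
                                      singleEvents=True).execute()
        for event in resp.get("items", []):
            start = event.get("start", {}).get("dateTime")
            end = event.get("end", {}).get("dateTime")
            desc = event.get("description") or ""
            zero_duration = start is not None and start == end
            looks_encoded = len(desc) >= min_blob_len and BLOB_RE.fullmatch(desc)
            if zero_duration and looks_encoded:
                yield event.get("id"), event.get("summary", "(no title)")
        page_token = resp.get("nextPageToken")
        if not page_token:
            break
```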

Google noted that the malware’s stealth — avoiding traditional file installation and using a legitimate Google service for communication — made it difficult for many security tools to detect.

To mitigate the threat, GTIG crafted specific detection signatures, disabled the threat actor’s associated Workspace accounts and calendar entries, updated file recognition tools, and expanded its Safe Browsing blocklist to include malicious domains and URLs linked to the attack.

Several organizations were reportedly targeted. “In partnership with Mandiant Consulting, GTIG notified the compromised organizations,” Google stated. “We provided the notified organizations with a sample of TOUGHPROGRESS network traffic logs, and information about the threat actor, to aid with detection and incident response.”

Google did not disclose the exact number of impacted entities.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

Google Unveils AI With Deep Reasoning and Creative Video Capabilities

 


At its annual Google Marketing Live 2025 event on Wednesday, May 21, Google unveiled a comprehensive suite of artificial intelligence-powered tools intended to cement its position at the forefront of digital commerce and advertising.

The new tools are intended to revolutionise the way brands engage with consumers and drive measurable growth through artificial intelligence, and they form part of a strategic push to redefine the future of advertising and online shopping. In her presentation, Vidhya Srinivasan, Vice President and General Manager of Google Ads and Commerce, stressed the importance of this change, saying, “The future of advertising is already here, fueled by artificial intelligence.” 

The declaration was followed by the announcement of advanced solutions that will enable businesses to use smarter bidding, dynamic creative generation, and intelligent agent-based assistants that adjust in real time to user behaviour and changing market conditions. The launch comes at a critical moment, as generative AI platforms and conversational search tools put unprecedented pressure on traditional search and shopping channels and divert users away from them. 

By treating this technological disruption as an opportunity, Google underscores its commitment to staying ahead of the curve and creating innovation-driven opportunities for brands and marketers around the world. Google's journey into artificial intelligence dates back much earlier than many people think, evolving steadily since the company's founding in 1998. 

While Google has always been known for its groundbreaking PageRank algorithm, its formal commitment to artificial intelligence accelerated through the mid-2000s, with milestones such as the acquisition of Pyra Labs in 2003 and the launch of Google Translate in 2006. These early efforts laid the foundation for AI-driven content analysis and translation, and by 2010 Google Instant was showing how predictive algorithms could enhance the user experience with real-time search query suggestions. 

In the years that followed, AI research and innovation became increasingly central, as evidenced by the establishment of Google X in 2011 and the strategic acquisition of DeepMind in 2014, a pioneer in reinforcement learning that went on to create the historic AlphaGo system. From 2016 onwards, a new wave of AI arrived with Google Assistant and tools like TensorFlow, which democratized machine learning development. 

Breakthroughs such as Duplex highlighted AI's growing conversational sophistication, and more recently Google's AI has embraced multimodal capabilities, with models like BERT, LaMDA, and PaLM revolutionising language understanding and dialogue. This rich legacy underscores AI's crucial role in driving Google's transformation across search, creativity, and business solutions. 

At its annual Google I/O developer conference in 2025, Google reaffirmed its leadership in the rapidly developing field of artificial intelligence by unveiling an impressive lineup of innovations that promise to change the way people interact with technology. With a heavy emphasis on AI-driven transformation, this year's event showcased next-generation models and tools that go well beyond those of previous years. 

The announcements ranged from AI assistants with deeper contextual intelligence to the creation of entire videos with dialogue, marking a monumental leap forward in both the creative and cognitive capabilities of AI. The centrepiece of the showcase was the unveiling of Gemini 2.5, Google's most advanced AI model to date. Positioned as the flagship of the Gemini series, it sets new industry standards for performance across key dimensions such as reasoning, speed, and contextual awareness. 

Gemini 2.5 outperforms its predecessors and rivals, including Google's own Gemini Flash, and redefines expectations of what artificial intelligence can do. Among its most significant advantages is enhanced problem-solving ability: far more than a tool for retrieving information, it acts as a true cognitive assistant, providing precise, contextually aware responses to complex, layered queries. 

Despite its significantly enhanced capabilities, it operates faster and more efficiently, making it easier to integrate seamlessly into real-time applications, from customer support to high-level planning tools. Its advanced understanding of contextual cues also allows it to hold more intelligent, coherent conversations, so that interacting with it feels more like collaborating with a person than operating a machine. The development marks a paradigm shift in artificial intelligence rather than an incremental improvement. 

It is a sign that artificial intelligence is moving toward a point where systems are capable of reasoning, adapting, and contributing in meaningful ways across the creative, technical, and commercial spheres. Google I/O 2025 serves as a preview of a future where AI will become an integral part of productivity, innovation, and experience design for digital creators, businesses, and developers alike. 

Building on this momentum, Google announced major improvements to its Gemini large language model lineup, another step in its quest to develop more powerful, adaptive AI systems. The new iterations, Gemini 2.5 Flash and Gemini 2.5 Pro, feature significant architectural improvements aimed at optimising performance across a wide range of uses. 

Gemini 2.5 Flash, a fast, lightweight model designed for high-speed use, is expected to reach general availability in early June 2025, with the more advanced Pro version following shortly afterwards. Among the Pro model's most notable features is “Deep Think”, which applies advanced reasoning techniques and parallel processing to handle complex tasks. 

Inspired by AlphaGo's strategic modelling, Deep Think gives the model the ability to explore multiple solution paths simultaneously, producing faster and more accurate results. This positions it as a cutting-edge option for high-level reasoning, mathematical analysis, and competition-grade programming. In a press briefing, Demis Hassabis, CEO of Google DeepMind, highlighted its impressive performance on USAMO 2025, one of the world's most challenging mathematics competitions, and on LiveCodeBench, a popular benchmark for advanced coding.

“Deep Think pushed the performance of models to the limit, resulting in groundbreaking results,” Hassabis said. In keeping with its commitment to ethical AI deployment, Google is adopting a cautious release strategy: to ensure safety, reliability, and transparency, Deep Think will initially be accessible only to a limited number of trusted testers who can provide feedback. 

The deliberate rollout demonstrates Google's intent to scale frontier AI capabilities responsibly while maintaining trust and control. On the creative side, Google announced two powerful generative media models: Veo 3 for video generation and Imagen 4 for image generation, both of which represent significant breakthroughs in generative media technology. 

These innovations give creators a much deeper, more immersive toolkit for telling visual and audio stories with remarkable realism and precision. Veo 3 represents a transformative leap in video generation: for the first time, AI-generated videos are no longer silent, motion-only clips but full productions with rich visual and audio elements. 

With fully synchronised audio, including ambient sound, sound effects, and even dialogue between characters, the results feel more like a real cinematic production than simple algorithmic output. "For the first time in history, we are entering into a new era of video creation," said Demis Hassabis, highlighting both the visual fidelity and the auditory depth of the new model. Building on these breakthroughs, Google has developed Flow, a new AI-powered filmmaking platform aimed at creative professionals. 

Flow combines Google's most advanced generative models into an intuitive interface, letting storytellers design cinematic sequences with greater ease and fluidity than ever before. The company says Flow recreates an intuitive, inspired creative process in which iteration feels effortless and ideas evolve naturally. Several filmmakers have already used Flow, in combination with traditional methods, to create short films that illustrate the creative potential of the technology.

Imagen 4, the latest update to Google's image generation model, offers marked improvements in visual clarity and fine detail, and especially in typography and text rendering. These improvements make it a powerful tool for marketers, designers, and content creators who need visuals that combine high-quality imagery with precise, readable text. 

Imagen 4 marks a significant step forward for AI-driven visual storytelling, whether for branding, digital campaigns, or presentations. Amid fierce competition from other leading technology companies, Google has also made significant advances in autonomous AI agents as the landscape of intelligent automation rapidly evolves.

Microsoft's GitHub Copilot has already demonstrated how powerful AI-driven development assistants can be, and OpenAI's Codex continues to push the boundaries of what AI can do. It is in this context that Google introduced tools like Stitch and Jules, which can generate a complete website, codebase, or user interface with minimal human input, signalling a shift in how software and digital content are created. The convergence of autonomous AI technologies from multiple industry giants underscores a trend towards automating increasingly complex knowledge tasks. 

Through these AI systems, organisations can respond quickly to changing market demands and evolving consumer preferences with real-time recommendations and dynamic adjustments. Such responsiveness helps optimise operational efficiency, maximise resource utilisation, and support sustainable growth while keeping the company tightly aligned with its strategic goals. AI provides businesses with actionable insights that enable them to compete more effectively in an increasingly complex, fast-paced marketplace. 

Beyond software and business applications, Google's AI innovations hold great potential for the healthcare sector, where advances in diagnostic accuracy and personalised treatment planning could greatly improve patient outcomes. Improvements in natural language processing and multimodal interaction will also make user interfaces more intuitive and accessible to people from diverse backgrounds, lowering barriers to adoption. 

As artificial intelligence becomes an integral part of everyday life, its influence will be transformative, reshaping industries, redefining workflows, and producing profound social effects. Google's leadership in this space points not only to a future in which AI augments human capabilities, but also to a new era of progress in science, the economy, and culture.

Cybercriminals Are Dividing Tasks — Why That’s a Big Problem for Cybersecurity Teams

 



Cyberattacks aren’t what they used to be. Instead of one group planning and carrying out an entire attack, today’s hackers are breaking the process into parts and handing each step to different teams. This method, often seen in cybercrime now, is making it more difficult for security experts to understand and stop attacks.

In the past, cybersecurity analysts looked at threats by studying them as single operations done by one group with one goal. But that method is no longer enough. These days, many attackers specialize in just one part of an attack—like finding a way into a system, creating malware, or demanding money—and then pass on the next stage to someone else.

To better handle this shift, researchers from Cisco Talos, a cybersecurity team, have proposed updating an older method called the Diamond Model. This model originally focused on four parts of a cyberattack: the attacker, the target, the tools used, and the systems involved. The new idea is to add a fifth layer that shows how different hacker groups are connected and work together, even if they don’t share the same goals.

By tracking relationships between groups, security teams can better understand who is doing what, avoid mistakes when identifying attackers, and spot patterns across different incidents. This helps them respond more accurately and efficiently.

The idea of cybercriminals selling services isn’t new. For years, online forums have allowed criminals to buy and sell services—like renting out access to hacked systems or offering ransomware as a package. Some of these relationships are short-term, while others involve long-term partnerships where attackers work closely over time.

In one recent case, a group called ToyMaker focused only on breaking into systems. They then passed that access to another group known as Cactus, which launched a ransomware attack. This type of teamwork shows how attackers are now outsourcing parts of their operations, which makes it harder for investigators to pin down who’s responsible.

Other companies, like Elastic and Google’s cyber threat teams, have also started adapting their systems to deal with this trend. Google, for example, now uses separate labels to track what each group does and what motivates them—whether it's financial gain, political beliefs, or personal reasons. This helps avoid confusion when different groups work together for different reasons.

As cybercriminals continue to specialize, defenders will need smarter tools and better models to keep up. Understanding how hackers divide tasks and form networks may be the key to staying one step ahead in this ever-changing digital battlefield.

Google’s New Android Security Update Might Auto-Reboot Your Phone After 3 Days

 

In a recent update to Google Play Services, the tech giant revealed a new security feature that could soon reboot your Android smartphone automatically — and this move could actually boost your device’s safety.

According to the update, Android phones left unused for three consecutive days will automatically restart. While this might sound intrusive at first, the reboot comes with key security benefits.

There are two primary reasons why this feature is important:

First, after a reboot, the only way to unlock a phone is by entering the PIN; biometric options like fingerprint or facial recognition won't work until the PIN has been entered manually. This forced PIN entry adds protection, making it much harder for unauthorized individuals to access your device or the data on it.

Second, the update enhances encryption security. Android devices operate in two states: Before First Unlock (BFU) and After First Unlock (AFU). In the BFU state, your phone’s contents are completely encrypted, meaning that even advanced tools can’t extract the data.

This security measure also affects how law enforcement and investigative agencies handle seized phones. Since the BFU state kicks in automatically after a reboot, authorities have a limited window to access a device before it locks down data access completely.

“A BFU phone remains connected to Wi-Fi or mobile data, meaning that if you lose your phone and it reboots, you'll still be able to use location-finding services.”

The feature is listed in Google’s April 2025 System release notes, and while it appears to extend to Android tablets, it won’t apply to wearables like the Pixel Watch, Android Auto, or Android TVs.

As of now, Google hasn’t clarified whether users will have the option to turn off this feature or customize the three-day timer.

Because it’s tied to Google Play Services, users will receive the feature passively — there’s no need for a full system update to access it.

Google to Pay Texas $1.4 Billion For Collecting Personal Data

 

The state of Texas has declared victory after reaching a settlement of more than $1 billion with Google parent company Alphabet over charges that it illegally tracked user activity and collected private data. 

Texas Attorney General Ken Paxton announced the state's highest sanctions to date against the tech behemoth for how it manages the data that people generate when they use Google and other Alphabet services. 

“For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won,” Paxton noted in a May 9 statement announcing the settlement.

“This $1.375 billion settlement is a major win for Texans’ privacy and tells companies that they will pay for abusing our trust. I will always protect Texans by stopping Big Tech’s attempts to make a profit by selling away our rights and freedoms.”

The dispute dates back to 2022, when the Texas Attorney General's Office filed a complaint against Google and Alphabet, saying that the firm was illegally tracking activities, including Incognito searches and geolocation. The state also claimed that Google and Alphabet acquired biometric information from consumers without their knowledge or consent. 

According to Paxton's office, the Texas settlement is by far the highest individual penalty imposed on Google for alleged user privacy violations and data collecting, with the previous high being a $341 million settlement with a coalition of 41 states in a collective action. 

The AG's office declined to explain how the funds will be used. However, the state maintains a transparency webpage that details the programs it funds through penalties. The settlement is not the first time Google has encountered regulatory issues in the Lone Star State. 

The company previously agreed to pay two separate penalties of $8 million and $700 million in response to claims that it used deceptive marketing techniques and violated anti-competitive laws. 

Texas also went after other tech behemoths, securing a $1.4 billion settlement from Facebook parent firm Meta over allegations that it misused data and misled its customers about its data gathering and retention practices. 

Such punitive measures stand out in Texas, a state with a long history of favouring and protecting major firms through legislation and legal policy. Larger states such as Texas can also influence national policy and corporate decisions because of their population size.

This Free Tool Helps You Find Out if Your Personal Information Is Exposed Online

 


Many people don't realize how much of their personal data is floating around the internet. Even if you're careful and don’t use the internet much, your information like name, address, phone number, or email could still be listed on various websites. This can lead to annoying spam or, in serious cases, scams and fraud.

To help people become aware of this, ExpressVPN has created a free tool that lets you check where your personal information might be available online.


How the Tool Works

Using the tool is easy. You just enter your first and last name, age, city, and state. Once done, the tool scans 68 websites that collect and sell user data. These are called data broker sites.

It then shows whether your details, such as phone number, email address, location, or names of your relatives, appear on those sites. For example, one person searched their legal name and only one result came up. But when they searched the name they usually use online, many results appeared. This shows that the more you interact online, the more your data might be exposed.


Ways to Remove Your Data

The scan is free, but if you want the tool to remove your data, it offers a paid option. However, there are free ways to remove your information by yourself.

Most data broker sites have a page where you can ask them to delete your data. These pages are not always easy to find and often have names like “Opt-Out” or “Do Not Sell My Info.” But they are available and do work if you take the time to fill them out.

You can also use a feature from Google that allows you to request the removal of your personal data from its search results. This won’t delete the information from the original site, but it will make it harder for others to find it through a search engine. You can search for your name along with the site’s name and then ask Google to remove the result.


Other Tools That Can Help

If you don’t want to do this manually, there are paid services that handle the removal for you. These tools usually cost around $8 per month and can send deletion requests to hundreds of data broker sites.

It’s important to know what personal information of yours is available online. With this free tool from ExpressVPN, you can quickly check and take steps to protect your privacy. Whether you choose to handle removals yourself or use a service, taking action is a smart step toward keeping your data safe.

Google Now Scans Screenshots to Identify Geographic Locations

 


A new feature in Google Maps is making headlines around the world and already drawing mixed reviews from users. Currently available on iPhones, the update allows the app to scan screenshots and photographs to identify and suggest places in real time. 

While the technology has the potential to make life easier, for example by helping users remember places they have visited or discover new destinations, it also raises valid privacy concerns. The feature is powered by artificial intelligence, but there is little clarity about its exact mechanism; what is known is that the system analyses the content of images, a process that involves transmitting user data, including personal photos, across multiple servers. 

Currently, the feature is exclusive to iOS devices, with Android support coming later, an interesting delay given that Android is Google's own operating system. iPhone users have already been able to test it by uploading older images to Google Maps and comparing the results with locations they know. 

As the update gains traction, it is likely to generate considerable discussion about both its usefulness and its ethical implications. The feature is designed to streamline the user experience by making it easier to search for locations: it allows the app to automatically detect and extract location information, such as a place name or address, from screenshots stored on the device. It is currently available only on iPhones, with support for other devices to follow. 

Once a location is identified, it is added to a personalised list, allowing users to return to it without typing in the details manually. The enhancement is particularly useful for people who frequently take screenshots of places they intend to visit, whether from messages sent by friends and family or from social media. Users no longer have to switch back and forth between the phone's gallery and the Maps app to search for these places, making the process more seamless and intuitive. 

Google's approach reflects a larger push toward automated convenience, but it also brings concerns about data usage and user consent. According to Google, the motivation behind the feature stems from a problem many travellers face when researching destinations: keeping track of the screenshots they have taken of travel blogs, social media posts, or news articles.

In most cases, these valuable bits of information get lost in the camera roll, making them difficult to recall or retrieve when needed. Google aims to address this by making trip planning more efficient and less stressful. With Google Maps now surfacing saved spots automatically, users are less likely to miss a must-visit location simply because it was buried among other images. 

When users authorize the app to access their photo library, it scans screenshots for recognizable place names and addresses and notifies them instantly when it finds a match. If a location is recognized successfully, the app prompts the user that the place can be reviewed and saved. With this functionality, what was once a passive image archive becomes an interactive tool for travel planning, offering both convenience and personal navigation. 

First, users should make sure that Google Maps on their iPhone or iPad is updated to the latest version. Once the app is up to date, the journey from screenshot to saved destination becomes almost effortless: simply capture screenshots of pages that mention travel destinations, whether from blogs, articles, or curated recommendation lists, and the app can recognize and retrieve the location information they contain. 

A new "Screenshots" tab appears prominently under the "You" section at the bottom of the screen in the updated interface. The first time users open this section, they are given a short overview of the functionality and prompted to grant the app access to the device's photo gallery. With that access, the app can intelligently scan the images for place names and landmarks. 

Tapping the “Scan screenshots” button makes Google Maps analyse the stored screenshots for relevant geographical information embedded within them. Recognised locations are neatly organised into a list, allowing for selective curation, and once confirmed, these places are added to a custom list within the app. 
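
To make the underlying idea concrete, here is a minimal sketch of how this kind of screenshot-to-place extraction can work in general: pull the text out of the image with OCR, then try to geocode the candidate strings. This is not Google's actual pipeline; it assumes the pytesseract and geopy libraries purely for illustration, and the file name is a placeholder.

# Illustrative only: OCR a screenshot, then geocode any text lines that
# resolve to real places. Requires Pillow, pytesseract (with Tesseract
# installed) and geopy; "screenshot.png" is a placeholder file name.
from PIL import Image
import pytesseract
from geopy.geocoders import Nominatim

def places_from_screenshot(path):
    """Return (text, latitude, longitude) for lines that geocode successfully."""
    text = pytesseract.image_to_string(Image.open(path))
    geocoder = Nominatim(user_agent="screenshot-demo")  # demo agent string
    hits = []
    for line in (l.strip() for l in text.splitlines() if l.strip()):
        location = geocoder.geocode(line)  # returns None when nothing matches
        if location is not None:
            hits.append((line, location.latitude, location.longitude))
    return hits

if __name__ == "__main__":
    for name, lat, lon in places_from_screenshot("screenshot.png"):
        print(f"{name}: {lat:.4f}, {lon:.4f}")

A production system would filter OCR noise and throttle geocoding requests, but the basic flow (extract text, resolve it to coordinates, save the match) is the same idea the Maps feature automates.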

Each added location becomes a central hub, offering visuals, descriptions, navigation options, and the possibility of organising places into favourites or themed lists. Static images that once sat in the camera roll are transformed into a dynamic planning tool that can be accessed from anywhere, a combination of AI and user behaviour that shows how technology can turn everyday habits into smarter, more intuitive experiences.

As a further step toward making Google Maps a hands-off experience, users can also set it up to automatically scan any future screenshots they take. With access to the entire photo library, the app continuously monitors for new screenshots that contain location information and places them directly into the “Screenshots” section without any manual work. 

This option can be switched on or off at any time, giving users full control over how much of their photo content is analyzed. Those who prefer a more selective approach can instead choose specific images to scan manually.

In this way, users can enjoy the convenience of the tool while keeping the level of privacy and control they are comfortable with. As artificial intelligence continues to shape everyday digital experiences, the new Google Maps feature stands out as an example of how to combine automation with user control. 

By turning passive screenshots into actionable insights, the update creates a smarter, more intuitive way to discover, save, and revisit locations. It marks a significant step in Google Maps' evolution toward smarter navigation tools, bridging visual content with location intelligence to meet the rising demand for efficiency and automation. 

The rise of artificial intelligence continues to change how digital platforms work, and features like screenshot scanning highlight the need to preserve user control while adding convenience. For both users and industry watchers, this upgrade offers a glimpse of seamless, context-aware travel planning to come.

Google to Launch Gemini AI for Children Under 13

Google plans to roll out its Gemini artificial intelligence chatbot next week for children younger than 13 who have parent-managed Google accounts, as tech companies vie to attract young users with AI products. According to an email sent to the parent of an 8-year-old, the chatbot will soon be available to children, who will be able to use Gemini to ask questions, get help with homework, and create stories. 

The chatbot will be available to children whose guardians use Family Link, a Google feature that lets families set up Gmail and opt in to services such as YouTube for their children. To register a child's account, the parent provides the company with the child's personal information, such as name and date of birth. 

According to Google spokesperson Karl Ryan, Gemini has specific safeguards for younger users that restrict the chatbot from creating unsafe or harmful content. If a child with a Family Link account uses Gemini, the company does not use that data to train its AI models. 

Gemini for children could drive chatbot use among vulnerable populations at a time when companies, colleges, schools, and others are still grappling with the effects of popular generative AI. These systems are trained on massive datasets to produce human-like text and realistic images and videos, and Google and other chatbot developers are locked in fierce competition for young users' attention. 

Recently, President Donald Trump urged schools to embrace such tools for teaching and learning. Millions of teens already use chatbots for study help, virtual companionship, and writing coaching, yet experts warn that chatbots can pose serious threats to child safety. 

The bots are known to sometimes make things up. UNICEF and other children's advocacy groups have found that AI systems can misinform, manipulate, and confuse young children, who may struggle to understand that the chatbots are not human. 

According to UNICEF’s global research office, “Generative AI has produced dangerous content,” posing risks for children. Google has acknowledged some risks, cautioning parents that “Gemini can make mistakes” and suggesting they “help your child think critically” about the chatbot. 

Do Not Charge Your Phone at Public Stations, Experts Warn

For a long time, smartphones have had a built-in feature that protects us against unauthorized access over USB. On Android and iOS, pop-ups ask us to confirm access before a USB data connection is established and any data is transferred. 

But this defense is not enough to protect against “juice-jacking,” a hacking technique that manipulates charging stations to install malicious code, steal data, or gain access to a device while it is plugged in. Cybersecurity researchers have now discovered a serious loophole in this protection that hackers can exploit with ease. 

Hackers using new technique to hack smartphones via USB

According to experts, hackers can now use a new method called “choice jacking” to get access to a smartphone confirmed without the user ever realizing it. 

First, the hackers rig a charging station so that it presents itself as a USB keyboard when connected. Through USB Power Delivery, it then performs a “USB PD Data Role Swap,” establishes a Bluetooth connection, triggers the file-transfer consent pop-up, and approves the permission itself while acting as a Bluetooth keyboard. 

In this way, the charging station sidesteps the device's protection mechanism, which is meant to guard users against attacks carried out through USB peripherals. That becomes a serious problem once the attacker can reach all of the files and personal data stored on the phone and use them to compromise accounts. 

Experts at Graz University of Technology tested the technique on devices from several manufacturers, including Samsung, the second-largest smartphone vendor after Apple. Every tested smartphone allowed the researchers to transfer data while the screen was unlocked. 

No solution to this problem

Although smartphone manufacturers are aware of the problem, safety measures against juice-jacking remain insufficient. Only Google and Apple have implemented a fix, which requires users to enter their PIN or password before a connected device is authorized and a data transfer can begin. Other manufacturers have yet to offer an effective solution.
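
The snippet below is a conceptual model only, not any vendor's real API, of why tying consent to a PIN defeats choice jacking: a charging station that spoofs a keyboard can inject the tap on "Allow", but it has no way to know the user's secret.

# Conceptual model only: all names and fields here are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferRequest:
    screen_unlocked: bool
    consent_pressed: bool        # can be forged by a spoofed USB/Bluetooth keyboard
    pin_entered: Optional[str]   # cannot be forged without knowing the secret

def authorize_data_transfer(req: TransferRequest, device_pin: str) -> bool:
    """Grant file access only when consent is backed by the correct PIN."""
    if not (req.screen_unlocked and req.consent_pressed):
        return False
    return req.pin_entered == device_pin

# A malicious charger can press "Allow" for the user, but it cannot supply the PIN:
attack = TransferRequest(screen_unlocked=True, consent_pressed=True, pin_entered=None)
print(authorize_data_transfer(attack, device_pin="4912"))  # False: transfer blocked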

Having USB debugging enabled makes things worse: it lets attackers reach the device through the Android Debug Bridge, install their own apps, run files, and generally operate with elevated access. 

How to be safe?

The easiest way users can protect themselves from juice-jacking attacks is simply to never use a public charging station. Charging stations in busy areas such as airports and malls are the riskiest and should always be avoided. 

Users are advised to carry their own power banks when traveling and to keep their smartphones updated.

New Report Reveals Hackers Now Aim for Money, Not Chaos

Recent research from Mandiant reveals that financially motivated attacks are now the dominant trend, with more than 55% of the criminal groups active in 2024 aiming to steal or extort money from their targets, a sharp rise compared to previous years. 

About the report

The main highlight of the M-Trends report is that hackers are using every opportunity to advance their goals, for instance by deploying infostealer malware to steal credentials. Another trend is attacks on data repositories left unsecured by poor security hygiene. 

Hackers are also exploiting the gaps and risks that surface when an organization moves its data to the cloud. “In 2024, Mandiant initiated 83 campaigns and five global events and continued to track activity identified in previous years. These campaigns affected every industry vertical and 73 countries across six continents,” the report said. 

Ransomware-related attacks accounted for 21% of all intrusions in 2024 and made up almost two-thirds of the cases tied to monetization tactics. They sit alongside data theft, email compromises, cryptocurrency scams, and North Korean fake-job campaigns, all aimed at extracting money from targets. 

Exploits were the most common initial infection vector at 33%, followed by stolen credentials at 16%, phishing at 14%, web compromises at 9%, and prior compromises at 8%. 

Finance in danger

Finance was the most targeted industry, accounting for more than 17% of attacks, followed closely by professional services and business (11%), critical industries such as high tech (10%), government (10%), and healthcare (9%). 

Experts highlighted how broadly the attacks spread across industries, suggesting that any organization can become a target of state-sponsored attacks, whether politically or financially motivated. 

Stuart McKenzie, Managing Director of Mandiant Consulting EMEA, said: “Financially motivated attacks are still the leading category. While ransomware, data theft, and multifaceted extortion are and will continue to be significant global cybercrime concerns, we are also tracking the rise in the adoption of infostealer malware and the developing exploitation of Web3 technologies, including cryptocurrencies.” 

He also stressed that the “increasing sophistication and automation offered by artificial intelligence are further exacerbating these threats by enabling more targeted, evasive, and widespread attacks. Organizations need to proactively gather insights to stay ahead of these trends and implement processes and tools to continuously collect and analyze threat intelligence from diverse sources.”

Google Ends Privacy Sandbox, Keeps Third-Party Cookies in Chrome

 

Google has officially halted its years-long effort to eliminate third-party cookies from Chrome, marking the end of its once-ambitious Privacy Sandbox project. In a recent announcement, Anthony Chavez, VP of Privacy Sandbox, confirmed that the browser will continue offering users the choice to allow or block third-party cookies—abandoning its previous commitment to remove them entirely. 

Launched in 2020, Privacy Sandbox aimed to overhaul the way user data is collected and used for digital advertising. Instead of tracking individuals through cookies, Google proposed tools like the Topics API, which categorized users based on web behavior while promising stronger privacy protections. Despite this, critics claimed the project would ultimately serve Google’s interests more than users’ privacy or industry fairness. Privacy groups like the Electronic Frontier Foundation (EFF) warned users that the Sandbox still enabled behavioral tracking, and urged them to opt out. Meanwhile, regulators on both sides of the Atlantic scrutinized the initiative. 

In the UK, the Competition and Markets Authority (CMA) investigated the plan over concerns it would restrict competition by limiting how advertisers access user data. In the US, a federal judge recently ruled that Google engaged in deliberate anticompetitive conduct in the ad tech space—adding further pressure on the company. Originally intended to bring Chrome in line with browsers like Safari and Firefox, which block third-party cookies by default, the Sandbox effort repeatedly missed deadlines. In 2023, Google shifted its approach, saying users would be given the option to opt in rather than being automatically transitioned to the new system. Now, it appears the initiative has quietly folded. 

In his statement, Chavez acknowledged ongoing disagreements among advertisers, developers, regulators, and publishers about how to balance privacy with web functionality. As a result, Google will no longer introduce a standalone prompt to disable cookies and will instead continue with its current model of user control. The Movement for an Open Web (MOW), a vocal opponent of the Privacy Sandbox, described Google’s reversal as a victory. “This marks the end of their attempt to monopolize digital advertising by removing shared standards,” said MOW co-founder James Rosewell. “They’ve recognized the regulatory roadblocks are too great to continue.” 

With Privacy Sandbox effectively shelved, Chrome users will retain the ability to manage cookie preferences—but the web tracking status quo remains firmly in place.

Gmail Users Face a New Dilemma Between AI Features and Data Privacy

 



Google’s Gmail is now offering two new upgrades, but here’s the catch— they don’t work well together. This means Gmail’s billions of users are being asked to pick a side: better privacy or smarter features. And this decision could affect how their emails are handled in the future.

Let’s break it down. One upgrade focuses on stronger protection of your emails by encrypting them before they reach Google’s servers. This keeps your emails private; even Google won’t be able to read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.

But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them, so they can’t be included in smart search results. Choose privacy and you may lose out on smarter searches; choose AI help and you’ll need to let Google access more of your email content.
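
To see why the two features clash, consider the minimal sketch below. It uses Python's cryptography library and an invented search_on_server function (nothing Gmail actually exposes) to show that once mail is encrypted on the client, a server-side search or AI index only ever sees ciphertext.

# Illustrative only: the server never holds the key, so it cannot search or
# summarize the message; only the key holder on the client can.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # stays on the user's device
cipher = Fernet(key)

message = b"Flight to Lisbon departs 09:40 from gate B12"
ciphertext = cipher.encrypt(message)   # this is all the server ever stores

def search_on_server(stored: bytes, query: bytes) -> bool:
    # The server has no key, so it can only match raw bytes against the token.
    return query in stored

print(search_on_server(ciphertext, b"Lisbon"))   # False: server-side AI is blind
print(b"Lisbon" in cipher.decrypt(ciphertext))   # True: only the client can search

This is the trade-off in miniature: smarter server-side features need readable content, while client-held keys keep that content out of reach.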

This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.

Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.

Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.

This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.

In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.