
Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material. While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. 

They identified hundreds of instances in which safety guardrails had been deliberately removed. “AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community. 

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. 
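
For readers unfamiliar with how such deployments become visible, the sketch below shows the kind of check involved, assuming a single publicly reachable Ollama host: Ollama's HTTP API listens on port 11434 by default and lists the models it serves at /api/tags. The address used here is a placeholder, and the code is only illustrative of the measurement idea, not the researchers' actual tooling.

    # Illustrative only: query one hypothetical, publicly reachable Ollama host
    # and list the models it serves. Host and port are placeholders, not targets.
    import json
    import urllib.request

    HOST = "203.0.113.10"   # placeholder address (TEST-NET range), not a real deployment
    PORT = 11434            # Ollama's default API port

    def list_models(host: str, port: int, timeout: float = 5.0) -> list[str]:
        """Return model names advertised by an Ollama server via GET /api/tags."""
        url = f"http://{host}:{port}/api/tags"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            payload = json.load(resp)
        return [m.get("name", "?") for m in payload.get("models", [])]

    if __name__ == "__main__":
        try:
            print(list_models(HOST, PORT))
        except OSError as exc:
            print(f"host not reachable: {exc}")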

Geographically, around 30 percent of the observed hosts were located in China, with about 20 percent based in the United States, the researchers said. Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released. “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. 

“We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. 

Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.” 

Ollama, Google and Anthropic did not comment. 

Apple's New Feature Will Help Users Restrict Location Data


Apple has introduced a new privacy feature that allows users to restrict the accuracy of location data shared with cellular networks on select iPhone and iPad models.

About the feature

The “Limit Precise Location” feature becomes available after updating to iOS 26.3 or later. It restricts the information that mobile carriers use to determine a device's location through cell tower connections. Once the setting is turned on, cellular networks can determine only an approximate location, such as a neighbourhood, rather than a precise street address.

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” then “Cellular Data Options,” and tapping “Limit Precise Location.” After the setting is enabled, the device may require a restart to complete activation.

The privacy feature works only on the iPhone Air and on iPad Pro (M5) Wi-Fi + Cellular models running iOS 26.3 or later.

Where will it work?

The availability of this feature will depend on carrier support. The compatible mobile networks are:

EE and BT in the UK

Boost Mobile in the US

Deutsche Telekom in Germany

AIS and True in Thailand 

Apple hasn't shared the reason for introducing this feature yet.

Compatibility of networks with the new feature 

Although only a handful of networks currently support it, Apple's new privacy feature is a significant step toward limiting how much carriers can learn about their customers' movements and habits, since cellular networks can otherwise track device locations through tower connections as part of normal network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,” Apple explains.

Exposed Admin Dashboard in AI Toy Put Children’s Data and Conversations at Risk

 

A routine investigation by a security researcher into an AI-powered toy revealed a serious security lapse that could have exposed sensitive information belonging to children and their families.

The issue came to light when security researcher Joseph Thacker examined an AI toy owned by a neighbor. In a blog post, Thacker described how he and fellow researcher Joel Margolis uncovered an unsecured admin interface linked to the Bondu AI toy.

Margolis identified a suspicious domain—console.bondu.com—referenced in the Content Security Policy headers of the toy’s mobile app backend. On visiting the domain, he found a simple option labeled “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. Instead, logging in granted access to Bondu’s core administrative dashboard.

“We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

Further analysis of the dashboard showed that the researchers had unrestricted visibility into “Every conversation transcript that any child has had with the toy,” spanning “tens of thousands of sessions.” The exposed panel also included extensive personal details about children and their households, such as:
  • Child’s name and date of birth
  • Names of family members
  • Preferences, likes, and dislikes
  • Parent-defined developmental objectives
  • The custom name assigned to the toy
  • Historical conversations used to provide context to the language model
  • Device-level data including IP-based location, battery status, and activity state
  • Controls to reboot devices and push firmware updates
The researchers also observed that the system relies on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

Beyond the authentication flaw, the team identified an Insecure Direct Object Reference (IDOR) vulnerability in the API. This weakness “allowed us to retrieve any child’s profile data by simply guessing their ID.”
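
As a generic illustration of this class of flaw (hypothetical code, not Bondu's actual API), the sketch below shows a route that returns any profile by numeric ID with no ownership check, alongside the authorization check that closes the hole:

    # Hypothetical illustration of an IDOR and its fix; not Bondu's real API.
    from flask import Flask, abort, g

    app = Flask(__name__)
    PROFILES = {1: {"owner": "parent-a", "name": "Child A"},
                2: {"owner": "parent-b", "name": "Child B"}}

    # Vulnerable: any authenticated caller can fetch any profile by guessing the ID.
    @app.get("/v1/profiles/<int:profile_id>")
    def get_profile_vulnerable(profile_id: int):
        profile = PROFILES.get(profile_id) or abort(404)
        return profile                      # no check that the caller owns this record

    # Fixed: the record must belong to the authenticated account (g.account_id is
    # assumed to be set by an authentication layer not shown here).
    @app.get("/v2/profiles/<int:profile_id>")
    def get_profile_checked(profile_id: int):
        profile = PROFILES.get(profile_id) or abort(404)
        if profile["owner"] != getattr(g, "account_id", None):
            abort(403)                      # object exists, but caller is not authorized
        return profile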

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”

Bondu Responds Within Minutes

Margolis contacted Bondu’s CEO via LinkedIn over the weekend, prompting the company to disable access to the exposed console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

Bondu also initiated a broader security review, searched for additional vulnerabilities, and launched a bug bounty program. After reviewing console access logs, the company stated that no unauthorized parties had accessed the system aside from the researchers, preventing what could have become a data breach.

Despite the swift and responsible response, the incident changed Thacker’s perspective on AI-driven toys.

“To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place. I’m not sure handing that type of access to our kids is a good idea.”

He further noted that, beyond data security concerns, AI introduces new risks at home. “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu’s website maintains that the toy was designed with safety as a priority, stating that its “safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period.”

Google’s Project Genie Signals a Major Shift for the Gaming Industry

 

Google has sent a strong signal to the video game sector with the launch of Project Genie, an experimental AI world-model that can create explorable 3D environments using simple text or image prompts.

Although Google’s Genie AI has been known since 2024, its integration into Project Genie marks a significant step forward. The prototype is now accessible to Google AI Ultra subscribers in the US and represents one of Google’s most ambitious AI experiments to date.

Project Genie is being introduced through Google Labs, allowing users to generate short, interactive environments that can be explored in real time. Built on DeepMind’s Genie 3 world-model research, the system lets users move through AI-generated spaces, tweak prompts, and instantly regenerate variations. However, it is not positioned as a full-scale game engine or production-ready development tool.

Demonstrations on the Project Genie website showcase a variety of scenarios, including a cat roaming a living room from atop a Roomba, a vehicle traversing the surface of a rocky moon, and a wingsuit flyer gliding down a mountain. These environments remain navigable in real time, and while the worlds are generated dynamically as characters move, consistency is maintained. Revisiting areas does not create new terrain, and any changes made by an agent persist as long as the system retains sufficient memory.

"Genie 3 environments are … 'auto-regressive' – created frame by frame based on the world description and user actions," Google explains on Genie's website. "The environments remain largely consistent for several minutes, with memory recalling changes from specific interactions for up to a minute."

Despite these capabilities, time constraints remain a challenge.

"The model can support a few minutes of continuous interaction, rather than extended hours," Google said, adding elsewhere that content generation is currently capped at 60 seconds. A Google spokesperson told The Register that Genie can render environments beyond that limit, but the company "found 60 seconds provides a high quality and consistent world, and it gives people enough time to explore and experience the environment."

Google stated that world consistency lasts throughout an entire session, though it remains unclear whether session durations will be expanded in the future. Beyond time limits, the system has other restrictions.

Agents in Genie’s environments are currently limited in the actions they can perform, and interactions between multiple agents are unreliable. The model struggles with readable text, lacks accurate real-world simulation, and can suffer from lag or delayed responses. Google also acknowledged that some previously announced features are missing.

In addition, "A few of the Genie 3 model capabilities we announced in August, such as promptable events that change the world as you explore it, are not yet included in this prototype," Google added.

"A world model simulates the dynamics of an environment, predicting how they evolve and how actions affect them," the company said of Genie. "While Google DeepMind has a history of agents for specific environments like Chess or Go, building AGI requires systems that navigate the diversity of the real world."

Game Developers Face an Uncertain Future

Beyond AGI research, Google also sees potential applications for Genie within the gaming industry—an area already under strain. While Google emphasized that Genie "is not a game engine and can’t create a full game experience," a spokesperson told The Register, "we are excited to see the potential to augment the creative process, enhancing ideation, and speeding up prototyping."

Industry data suggests this innovation arrives at a difficult time. A recent Informa Game Developers Conference report found that 33 percent of US game developers and 28 percent globally experienced at least one layoff over the past two years. Half of respondents said their employer had conducted layoffs within the last year.

Concerns about AI’s role are growing. According to the same survey, 52 percent of industry professionals believe AI is negatively affecting the games sector—up sharply from 30 percent last year and 18 percent the year before. The most critical views came from professionals working in visual and technical art, narrative design, programming, and game design.

One machine learning operations employee summed up those fears bluntly.

"We are intentionally working on a platform that will put all game devs out of work and allow kids to prompt and direct their own content," the GDC study quotes the respondent as saying.

While Project Genie still has clear technical limitations, the rapid pace of AI development suggests those gaps may not last long—raising difficult questions about the future of game development.

Anthropic Cracks Down on Claude Code Spoofing, Tightens Access for Rivals and Third-Party Tools

 

Anthropic has rolled out a new set of technical controls aimed at stopping third-party applications from impersonating its official coding client, Claude Code, to gain cheaper access and higher usage limits to Claude AI models. The move has directly disrupted workflows for users of popular open-source coding agents such as OpenCode.

At the same time—but through a separate enforcement action—Anthropic has also curtailed the use of its models by competing AI labs, including xAI, which accessed Claude through the Cursor integrated development environment. Together, these steps signal a tightening of Anthropic’s ecosystem as demand for Claude Code surges.

The anti-spoofing update was publicly clarified on Friday by Thariq Shihipar, a Member of Technical Staff at Anthropic working on Claude Code. Writing on X (formerly Twitter), Shihipar said the company had "tightened our safeguards against spoofing the Claude Code harness." He acknowledged that the rollout caused unintended side effects, explaining that some accounts were automatically banned after triggering abuse detection systems—an issue Anthropic says it is now reversing.

While those account bans were unintentional, the blocking of third-party integrations themselves appears to be deliberate.

Why Harnesses Were Targeted

The changes focus on so-called “harnesses”—software wrappers that control a user’s web-based Claude account via OAuth in order to automate coding workflows. Tools like OpenCode achieved this by spoofing the client identity and sending headers that made requests appear as if they were coming from Anthropic’s own command-line interface.

This effectively allowed developers to link flat-rate consumer subscriptions, such as Claude Pro or Max, with external automation tools—bypassing the intended limits of plans designed for human, chat-based use.

According to Shihipar, technical instability was a major motivator for the block. Unauthorized harnesses can introduce bugs and usage patterns that Anthropic cannot easily trace or debug. When failures occur in third-party wrappers like OpenCode or certain Cursor configurations, users often blame the model itself, which can erode trust in the platform.

The Cost Question and the “Buffet” Analogy

Developers, however, have largely framed the issue as an economic one. In extended discussions on Hacker News, users compared Claude’s consumer subscriptions to an all-you-can-eat buffet: Anthropic offers a flat monthly price—up to $200 for Max—but controls consumption speed through its official Claude Code tool.

Third-party harnesses remove those speed limits. Autonomous agents running inside tools like OpenCode can execute intensive loops—writing code, running tests, fixing errors—continuously and unattended, often overnight. At that scale, the same usage would be prohibitively expensive under per-token API pricing.

"In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API," wrote Hacker News user dfabulich.

By cutting off spoofed harnesses, Anthropic is effectively pushing heavy automation into two approved channels: its metered Commercial API, or Claude Code itself, where execution speed and environment constraints are fully controlled.

Community Reaction and Workarounds

The response from developers has been swift and mixed. Some criticized the move as hostile to users. "Seems very customer hostile," wrote Danish programmer David Heinemeier Hansson (DHH), creator of Ruby on Rails, in a post on X.

Others were more understanding. "anthropic crackdown on people abusing the subscription auth is the gentlest it could’ve been," wrote Artem K aka @banteg on X. "just a polite message instead of nuking your account or retroactively charging you at api prices."

The OpenCode team moved quickly, launching a new $200-per-month tier called OpenCode Black that reportedly routes usage through an enterprise API gateway rather than consumer OAuth. OpenCode creator Dax Raad also announced plans to work with Anthropic rival OpenAI so users could access Codex directly within OpenCode, punctuating the announcement with a Gladiator GIF captioned "Are you not entertained?"

The xAI and Cursor Enforcement

Running parallel to the technical crackdown, developers at Elon Musk’s AI lab xAI reportedly lost access to Claude models around the same time. While the timing suggested coordination, sources indicate this was a separate action rooted in Anthropic’s commercial terms.

As reported by tech journalist Kylie Robison of Core Memory, xAI staff had been using Claude models through the Cursor IDE to accelerate internal development. "Hi team, I believe many of you have already discovered that Anthropic models are not responding on Cursor," wrote xAI co-founder Tony Wu in an internal memo. "According to Cursor this is a new policy Anthropic is enforcing for all its major competitors."

Anthropic’s Commercial Terms of Service explicitly prohibit using its services to build or train competing AI systems. In this case, Cursor itself was not the issue; rather, xAI’s use of Claude through the IDE for competitive research triggered the block.

This is not the first time Anthropic has cut off access to protect its models. In August 2025, the company revoked OpenAI’s access to the Claude API under similar circumstances. At the time, an Anthropic spokesperson said, "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools."

Earlier, in June 2025, the coding environment Windsurf was abruptly informed that Anthropic was cutting off most first-party capacity for Claude 3.x models. Windsurf was forced to pivot to a bring-your-own-key model and promote alternatives like Google’s Gemini.

Together with the xAI and OpenCode actions, these incidents underscore a consistent message: Anthropic will sever access when usage threatens its business model or competitive position.

Claude Code’s Rapid Rise

The timing of the crackdowns closely follows a dramatic surge in Claude Code’s popularity. Although released in early 2025, it remained niche until December 2025 and early January 2026, when community-driven experimentation—popularized by the so-called “Ralph Wiggum” plugin—demonstrated powerful self-healing coding loops.

The real prize, however, was not the Claude Code interface itself but the underlying Claude Opus 4.5 model. By spoofing the official client, third-party tools allowed developers to run large-scale autonomous workflows on Anthropic’s most capable reasoning model at a flat subscription price—effectively arbitraging consumer pricing against enterprise-grade usage.

As developer Ed Andersen noted on X, some of Claude Code’s popularity may have been driven by this very behavior.

For enterprise AI teams, the message is clear: pipelines built on unofficial wrappers or personal subscriptions carry significant risk. While flat-rate tools like OpenCode reduced costs, Anthropic’s enforcement highlights the instability and compliance issues they introduce.

Organizations now face a trade-off between predictable subscription fees and variable, per-token API costs—but with the benefit of guaranteed support and stability. From a security standpoint, the episode also exposes the dangers of “Shadow AI,” where engineers quietly bypass enterprise controls using spoofed credentials.

As Anthropic consolidates control over access to Claude’s models, the reliability of official APIs and sanctioned tools is becoming more important than short-term cost savings. In this new phase of the AI arms race, unrestricted access to top-tier reasoning models is no longer a given—it’s a privilege tightly guarded by their creators.

India Cracks Down on Grok's AI Image Misuse

 

The Ministry of Electronics and Information Technology (MeitY) of India has found that the latest restrictions placed by X on Grok’s image generation tool are not adequate to prevent obscene content. The platform, owned by Elon Musk, restricted the controversial feature, known as Grok Imagine, to paid subscribers worldwide, a step intended to stop free users from creating abusive images. However, officials have argued that allowing such image generation at all violates Indian laws on privacy and dignity, especially regarding women and children.

Grok Imagine, available on X and as a standalone app, has been linked to a surge in pornographic and abusive images, including non-consensual nude depictions of real people, among them children. Its “Spicy Mode,” which produced such images, sparked outrage across India, the United Kingdom, Türkiye, Malaysia, Brazil, and the European Union. The mode allowed users to generate images of real people in states of undress, including depictions of women in bikinis, and drew sharp criticism from members of Parliament in India.

X's partial fixes fall short 

On 2 January 2026, MeitY ordered X to remove all vulgar images generated on the platform within 72 hours and to report on the actions taken to comply. X's response cited stricter image filters, but officials argued the company failed to provide adequate technical detail on how such images would be prevented from being generated. They also noted that Grok's website still allows users to create images for free.

X now restricts image generation and editing via @Grok replies to premium users, but loopholes persist: the Grok app and website remain open to all, and X's image edit button is accessible platform-wide. Grok stated illegal prompts face the same penalties as uploads, yet regulators demand proactive safeguards. MeitY seeks comprehensive measures to block obscene outputs entirely. 

This clash highlights rising global scrutiny of AI tools that lack robust guardrails against deepfakes and harm. India's IT Rules 2021 mandate swift content removal, with non-compliance risking liability for platforms and executives. As X refines Grok, the case underscores the need for ethical AI design amid rapid technological change, balancing innovation with societal protection.

Raspberry Pi Project Turns Wi-Fi Signals Into Visual Light Displays

 



Wireless communication surrounds people at all times, even though it cannot be seen. Signals from Wi-Fi routers, Bluetooth devices, and mobile networks constantly travel through homes and cities unless blocked by heavy shielding. A France-based digital artist has developed a way to visually represent this invisible activity using light and low-cost computing hardware.

The creator, Théo Champion, who is also known online as Rootkid, designed an installation called Spectrum Slit. The project captures radio activity from commonly used wireless frequency ranges and converts that data into a visual display. The system focuses specifically on the 2.4 GHz and 5 GHz bands, which are widely used for Wi-Fi connections and short-range wireless communication.

The artwork consists of 64 vertical LED filaments arranged in a straight line. Each filament represents a specific portion of the wireless spectrum. As radio signals are detected, their strength and density determine how brightly each filament lights up. Low signal activity results in faint and scattered illumination, while higher levels of wireless usage produce intense and concentrated light patterns.

According to Champion, quiet network conditions create a subtle glow that reflects the constant but minimal background noise present in urban environments. As wireless traffic increases, the LEDs become brighter and more saturated, forming dense visual bands that indicate heavy digital activity.

A video shared on YouTube shows the construction process and the final output of the installation inside Champion’s Paris apartment. The footage demonstrates a noticeable increase in brightness during evening hours, when nearby residents return home and connect phones, laptops, and other devices to their networks.

Champion explained in an interview that his work is driven by a desire to draw attention to technologies people often ignore, despite their significant influence on daily life. By transforming technical systems into physical experiences, he aims to encourage viewers to reflect on the infrastructure shaping modern society and to appreciate the engineering behind it.

The installation required both time and financial investment. Champion built the system using a HackRF One software-defined radio connected to a Raspberry Pi. The radio device captures surrounding wireless signals, while the Raspberry Pi processes the data and controls the lighting behavior. The software was written in Python, but other components, including the metal enclosure and custom circuit boards, had to be professionally manufactured.
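
A rough sketch of the signal-to-light mapping such a build implies is shown below: compute a power spectrum over the band of interest, fold it into 64 bins, and scale each bin into an LED brightness value. The sample source and LED driver are stubbed placeholders, since Champion's actual capture and hardware code are not published here.

    # Rough sketch: fold a power spectrum into 64 bins and map each bin to an LED
    # brightness. Sample acquisition and the LED driver are stubbed placeholders;
    # the real installation's HackRF capture and hardware code are not shown here.
    import numpy as np

    NUM_LEDS = 64
    FFT_SIZE = 4096

    def capture_iq_samples(n: int) -> np.ndarray:
        """Placeholder for SDR capture (e.g. from a HackRF One); returns fake noise."""
        return (np.random.randn(n) + 1j * np.random.randn(n)) * 0.01

    def spectrum_to_brightness(iq: np.ndarray) -> np.ndarray:
        """Return NUM_LEDS brightness values in [0, 255] from an IQ capture."""
        power = np.abs(np.fft.fftshift(np.fft.fft(iq, FFT_SIZE))) ** 2
        power_db = 10 * np.log10(power + 1e-12)
        bins = power_db.reshape(NUM_LEDS, -1).mean(axis=1)      # fold the FFT into 64 bins
        lo, hi = bins.min(), bins.max()
        scaled = (bins - lo) / (hi - lo + 1e-9)                 # normalise to 0..1
        return (scaled * 255).astype(np.uint8)

    if __name__ == "__main__":
        brightness = spectrum_to_brightness(capture_iq_samples(FFT_SIZE))
        print(brightness)   # in the real build, these values would drive the filaments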

He estimates that development involved several weeks of experimentation, followed by a dedicated build phase. The total cost of materials and fabrication was approximately $1,000.

Champion has indicated that Spectrum Slit may be publicly exhibited in the future. He is also known for creating other technology-focused artworks, including interactive installations that explore data privacy, artificial intelligence, and digital systems. He has stated that producing additional units of Spectrum Slit could be possible if requested.

How Generative AI Is Accelerating Password Attacks on Active Directory

 

Active Directory remains the backbone of identity management for most organizations, which is why it continues to be a prime target for cyberattacks. What has shifted is not the focus on Active Directory itself, but the speed and efficiency with which attackers can now compromise it.

The rise of generative AI has dramatically reduced the cost and complexity of password-based attacks. Tasks that once demanded advanced expertise and substantial computing resources can now be executed far more easily and at scale.

Tools such as PassGAN mark a significant evolution in password-cracking techniques. Instead of relying on static wordlists or random brute-force attempts, these systems use adversarial learning to understand how people actually create passwords. With every iteration, the model refines its predictions based on real-world behavior.

The impact is concerning. Research indicates that PassGAN can crack 51% of commonly used passwords in under one minute and 81% within a month. The pace at which these models improve only increases the risk.

When trained using organization-specific breach data, public social media activity, or information from company websites, AI models can produce highly targeted password guesses that closely mirror employee habits.

How generative AI is reshaping password attack methods

Earlier password attacks followed predictable workflows. Attackers relied on dictionary lists, applied rule-based tweaks—such as replacing letters with symbols or appending numbers—and waited for successful matches. This approach was slow and computationally expensive.
  • Pattern recognition at scale: Machine learning systems identify nuanced behaviors in password creation, including keyboard habits, substitutions, and the use of personal references. Instead of wasting resources on random guesses, attackers concentrate computing power on the most statistically likely passwords.
  • Smart credential variation: When leaked credentials are obtained from external breaches, AI can generate environment-specific variations. If “Summer2024!” worked elsewhere, the model can intelligently test related versions such as “Winter2025!” or “Spring2025!” rather than guessing blindly.
  • Automated intelligence gathering: Large language models can rapidly process publicly available data—press releases, LinkedIn profiles, product names—and weave that context into phishing campaigns and password spray attacks. What once took hours of manual research can now be completed in minutes.
  • Reduced technical barriers: Pre-trained AI models and accessible cloud infrastructure mean attackers no longer need specialized skills or costly hardware. The increased availability of high-performance consumer GPUs has unintentionally strengthened attackers’ capabilities, especially when organizations rent out unused GPU capacity.
Today, for roughly $5 per hour, attackers can rent eight RTX 5090 GPUs capable of cracking bcrypt hashes about 65% faster than previous generations.

Even when strong hashing algorithms and elevated cost factors are used, the sheer volume of password guesses now possible far exceeds what was realistic just a few years ago. Combined with AI-generated, high-probability guesses, the time needed to break weak or moderately strong passwords has dropped significantly.

Why traditional password policies are no longer enough

Many Active Directory password rules were designed before AI-driven threats became mainstream. Common complexity requirements—uppercase letters, lowercase letters, numbers, and symbols—often result in predictable structures that AI models are well-equipped to exploit.

"Password123!" meets complexity rules but follows a pattern that generative models can instantly recognize.

Similarly, enforced 90-day password rotations have lost much of their defensive value. Users frequently make minor, predictable changes such as adjusting numbers or referencing seasons. AI systems trained on breach data can anticipate these habits and test them during credential stuffing attacks.
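
How predictable those habits are is easy to demonstrate. The small, rule-based sketch below (shown for defensive awareness, with made-up inputs) already reproduces the kind of variants an AI model would rank highly:

    # Defensive illustration: how predictable "rotation" variants are generated from
    # one leaked base password. The base word below is made up for the example.
    from itertools import product

    SEASONS = ["Spring", "Summer", "Autumn", "Fall", "Winter"]
    YEARS = ["2024", "2025", "2026"]
    SUFFIXES = ["!", "!!", "1!", "@"]

    def rotation_variants(base_word: str) -> list[str]:
        """Seasonal/year variants a credential-stuffing tool would try first."""
        variants = {f"{season}{year}{suffix}"
                    for season, year, suffix in product(SEASONS, YEARS, SUFFIXES)}
        variants.update({f"{base_word}{year}{suffix}"
                         for year, suffix in product(YEARS, SUFFIXES)})
        return sorted(variants)

    print(len(rotation_variants("Summer")))   # dozens of high-probability guesses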

While basic multi-factor authentication (MFA) adds protection, it does not eliminate the risks posed by compromised passwords. If attackers bypass MFA through tactics like social engineering, session hijacking, or MFA fatigue, access to Active Directory may still be possible.

Defending Active Directory against AI-assisted attacks

Countering AI-enhanced threats requires moving beyond compliance-driven controls and focusing on how passwords fail in real-world attacks. Password length is often more effective than complexity alone.

AI models struggle more with long, random passphrases than with short, symbol-heavy strings. An 18-character passphrase built from unrelated words presents a much stronger defense than an 8-character complex password.
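
A quick guess-space comparison, sketched below, shows why length wins: even restricted to lowercase letters, an 18-character passphrase presents a search space many orders of magnitude larger than an 8-character password drawn from the full printable character set. (Real word-based passphrases carry somewhat less entropy than fully random characters, but the gap remains enormous.)

    # Guess-space comparison: 8 characters from the ~94 printable ASCII symbols
    # versus 18 characters drawn only from lowercase letters.
    import math

    complex_8 = 94 ** 8          # 8-char "complex" password
    passphrase_18 = 26 ** 18     # 18-char lowercase passphrase (ignoring word structure)

    print(f"8-char complex : {complex_8:.2e} guesses (~{math.log2(complex_8):.0f} bits)")
    print(f"18-char phrase : {passphrase_18:.2e} guesses (~{math.log2(passphrase_18):.0f} bits)")
    print(f"ratio          : {passphrase_18 / complex_8:.1e}x larger search space")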

Equally critical is visibility into whether employee passwords have already appeared in breach datasets. If a password exists in an attacker’s training data, hashing strength becomes irrelevant—the attacker simply uses the known credential.

Specops Password Policy and Breached Password Protection help organizations defend against over 4 billion known unique compromised passwords, including those that technically meet complexity rules but have already been stolen by malware.

The solution updates daily using real-world attack intelligence, ensuring protection against newly exposed credentials. Custom dictionaries that block company-specific terminology—such as product names, internal jargon, and brand references—further reduce the effectiveness of AI-driven reconnaissance.

When combined with passphrase support and robust length requirements, these measures significantly increase resistance to AI-generated password guessing.

Before applying new controls, organizations should assess their existing exposure. Specops Password Auditor provides a free, read-only scan of Active Directory to identify weak passwords, compromised credentials, and policy gaps—without altering the environment.

This assessment helps pinpoint where AI-powered attacks are most likely to succeed.

Generative AI has fundamentally shifted the balance of effort in password attacks, giving adversaries a clear advantage.

The real question is no longer whether defenses need to be strengthened, but whether organizations will act before their credentials appear in the next breach.

Nvidia Introduces New AI Platform to Advance Self-driving Vehicle Technology

 



Nvidia is cementing its presence in the autonomous vehicle space by introducing a new artificial intelligence platform designed to help cars make decisions in complex, real-world conditions. The move reflects the company’s broader strategy to take AI beyond digital tools and embed it into physical systems that operate in public environments.

The platform, named Alpamayo, was introduced by Nvidia chief executive Jensen Huang during a keynote address at the Consumer Electronics Show in Las Vegas. According to the company, the system is built to help self-driving vehicles reason through situations rather than simply respond to sensor inputs. This approach is intended to improve safety, particularly in unpredictable traffic conditions where human judgment is often required.

Nvidia says Alpamayo enables vehicles to manage rare driving scenarios, operate smoothly in dense urban settings, and provide explanations for their actions. By allowing a car to communicate what it intends to do and why, the company aims to address long-standing concerns around transparency and trust in autonomous driving technology.

As part of this effort, Nvidia confirmed a collaboration with Mercedes-Benz to develop a fully driverless vehicle powered by the new platform. The company stated that the vehicle is expected to launch first in the United States within the next few months, followed by expansion into European and Asian markets.

Although Nvidia is widely known for the chips that support today’s AI boom, much of the public focus has remained on software applications such as generative AI systems. Industry attention is now shifting toward physical uses of AI, including vehicles and robotics, where decision-making errors can have serious consequences.

Huang noted that Nvidia’s work on autonomous systems has provided valuable insight into building large-scale robotic platforms. He suggested that physical AI is approaching a turning point similar to the rapid rise of conversational AI tools in recent years.

A demonstration shown at the event featured a Mercedes-Benz vehicle navigating the streets of San Francisco without driver input, while a passenger remained seated behind the wheel with their hands off. Nvidia explained that the system was trained using human driving behavior and continuously evaluates each situation before acting, while also explaining its decisions in real time.

Nvidia also made the Alpamayo model openly available, releasing its core code on the machine learning platform Hugging Face. The company said this would allow researchers and developers to freely access and retrain the system, potentially accelerating progress across the autonomous vehicle industry.

The announcement places Nvidia in closer competition with companies already offering advanced driver-assistance and autonomous driving systems. Industry observers note that while achieving high levels of accuracy is possible, addressing rare and unusual driving scenarios remains a major technical hurdle.

Nvidia further revealed plans to introduce a robotaxi service next year in partnership with another company, although it declined to disclose the partner’s identity or the locations where the service will operate.

The company currently holds the position of the world’s most valuable publicly listed firm, with a market capitalization exceeding 4.5 trillion dollars, or roughly £3.3 trillion. It briefly became the first company to reach a valuation of 5 trillion dollars in October, before losing some value amid investor concerns that expectations around AI demand may be inflated.

Separately, Nvidia confirmed that its next-generation Rubin AI chips are already being manufactured and are scheduled for release later this year. The company said these chips are designed to deliver strong computing performance while using less energy, which could help reduce the cost of developing and deploying AI systems.

AI Expert Warns World Is Running Out of Time to Tackle High-Risk AI Revolution

 

AI safety specialist David Dalrymple has warned in no uncertain terms that humanity may be running out of time to prepare for the dangers of fast-moving artificial intelligence. Speaking to The Guardian, the programme director at the UK government’s Advanced Research and Invention Agency (ARIA) emphasised that AI development is progressing “really fast,” and that no society can safely take the reliability of these systems for granted. He is the latest authoritative figure to voice the escalating global anxiety that deployment is outstripping safety research and governance models. 

Dalrymple contended that the existential risk comes from AI systems that can do virtually all economically valuable human work more quickly, at lower cost, and at higher quality. In his view, such systems might “outcompete” humans in the very domains that underpin our control over civilization, society and perhaps even planetary-scale decisions. The concern is not just about losing jobs, but about losing strategic dominance in vital sectors, from security to infrastructure management.

He described a scenario in which AI capabilities race ahead of safety mechanisms, triggering destabilisation across both the security landscape and the broader economy. Dalrymple emphasised an urgent need for more technical research into understanding and controlling the behaviour of advanced AI, particularly as systems become more autonomous and integrated into vital services. Without this work, he suggested, governments and institutions risk deploying tools whose failure modes and emergent properties they barely understand. 

 Dalrymple, who among other things consults with ARIA on creating protections for AI systems used in critical infrastructure like energy grids, warned that it is “very dangerous” for policymakers to believe advanced AI will just work as they want it to. He noted that the science needed to fully guarantee reliability is unlikely to emerge in time, given the intense economic incentives driving rapid deployment. As a result, he argued the “next best” strategy is aggressively focusing on controlling and mitigating the downsides, even if perfect assurance is out of reach. 

The AI expert also said that by late 2026, AI systems may be able to do a full day of R&D, including self-improvement in AI-related fields such as mathematics and computer science. Such a development would give a further jolt to AI capabilities and pull society deeper into what he described as a “high-risk” transition that civilization is mostly “sleepwalking” into. While he conceded that unsettling developments can ultimately yield benefits, he said the road we appear to be on holds considerable peril if safety continues to lag behind capability.

Geopolitical Conflict Is Increasing the Risk of Cyber Disruption




Cybersecurity is increasingly shaped by global politics. Armed conflicts, economic sanctions, trade restrictions, and competition over advanced technologies are pushing countries to use digital operations as tools of state power. Cyber activity allows governments to disrupt rivals quietly, without deploying traditional military force, making it an attractive option during periods of heightened tension.

This development has raised serious concerns about infrastructure safety. A large share of technology leaders fear that advanced cyber capabilities developed by governments could escalate into wider cyber conflict. If that happens, systems that support everyday life, such as electricity, water supply, and transport networks, are expected to face the greatest exposure.

Recent events have shown how damaging infrastructure failures can be. A widespread power outage across parts of the Iberian Peninsula was not caused by a cyber incident, but it demonstrated how quickly modern societies are affected when essential services fail. Similar disruptions caused deliberately through cyber means could have even more severe consequences.

There have also been rare public references to cyber tools being used during political or military operations. In one instance, U.S. leadership suggested that cyber capabilities were involved in disrupting electricity in Caracas during an operation targeting Venezuela’s leadership. Such actions raise concerns because disabling utilities affects civilians as much as strategic targets.

Across Europe, multiple incidents have reinforced these fears. Security agencies have reported attempts to interfere with energy infrastructure, including dams and national power grids. In one case, unauthorized control of a water facility allowed water to flow unchecked for several hours before detection. In another, a country narrowly avoided a major blackout after suspicious activity targeted its electricity network. Analysts often view these incidents against the backdrop of Europe’s political and military support for Ukraine, which has been followed by increased tension with Moscow and a rise in hybrid tactics, including cyber activity and disinformation.

Experts remain uncertain about the readiness of smart infrastructure to withstand complex cyber operations. Past attacks on power grids, particularly in Eastern Europe, are frequently cited as warnings. Those incidents showed how coordinated intrusions could interrupt electricity for millions of people within a short period.

Beyond physical systems, the information space has also become a battleground. Disinformation campaigns are evolving rapidly, with artificial intelligence enabling the fast creation of convincing false images and videos. During politically sensitive moments, misleading content can spread online within hours, shaping public perception before facts are confirmed.

Such tactics are used by states, political groups, and other actors to influence opinion, create confusion, and deepen social divisions. From Eastern Europe to East Asia, information manipulation has become a routine feature of modern conflict.

In Iran, ongoing protests have been accompanied by tighter control over internet access. Authorities have restricted connectivity and filtered traffic, limiting access to independent information. While official channels remain active, these measures create conditions where manipulated narratives can circulate more easily. Reports of satellite internet shutdowns were later contradicted by evidence that some services remained available.

Different countries engage in cyber activity in distinct ways. Russia is frequently associated with ransomware ecosystems, though direct state involvement is difficult to prove. Iran has used cyber operations alongside political pressure, targeting institutions and infrastructure. North Korea combines cyber espionage with financially motivated attacks, including cryptocurrency theft. China is most often linked to long-term intelligence gathering and access to sensitive data rather than immediate disruption.

As these threats harden into serious concerns, cybersecurity is increasingly viewed as an issue of national control. Governments and organizations are reassessing their reliance on foreign technology and cloud services due to legal, data protection, and supply chain concerns. This shift is already influencing infrastructure decisions and is expected to play a central role in security planning as global instability continues into 2026.

AI Can Answer You, But Should You Trust It to Guide You?



Artificial intelligence tools are expanding faster than any digital product seen before, reaching hundreds of millions of users in a short period. Leading technology companies are investing heavily in making these systems sound approachable and emotionally responsive. The goal is not only efficiency, but trust. AI is increasingly positioned as something people can talk to, rely on, and feel understood by.

This strategy is working because users respond more positively to systems that feel conversational rather than technical. Developers have learned that people prefer AI that is carefully shaped for interaction over systems that are larger but less refined. To achieve this, companies rely on extensive human feedback to adjust how AI responds, prioritizing politeness, reassurance, and familiarity. As a result, many users now turn to AI for advice on careers, relationships, and business decisions, sometimes forming strong emotional attachments.

However, there is a fundamental limitation that is often overlooked. AI does not have personal experiences, beliefs, or independent judgment. It does not understand success, failure, or responsibility. Every response is generated by blending patterns from existing information. What feels like insight is often a safe and generalized summary of commonly repeated ideas.

This becomes a problem when people seek meaningful guidance. Individuals looking for direction usually want practical insight based on real outcomes. AI cannot provide that. It may offer comfort or validation, but it cannot draw from lived experience or take accountability for results. The reassurance feels real, while the limitations remain largely invisible.

In professional settings, this gap is especially clear. When asked about complex topics such as pricing or business strategy, AI typically suggests well-known concepts like research, analysis, or optimization. While technically sound, these suggestions rarely address the challenges that arise in specific situations. Professionals with real-world experience know which mistakes appear repeatedly, how people actually respond to change, and when established methods stop working. That depth cannot be replicated by generalized systems.

As AI becomes more accessible, some advisors and consultants are seeing clients rely on automated advice instead of expert guidance. This shift favors convenience over expertise. In response, some professionals are adapting by building AI tools trained on their own methods and frameworks. In these cases, AI supports ongoing engagement while allowing experts to focus on judgment, oversight, and complex decision-making.

Another overlooked issue is how information shared with generic AI systems is used. Personal concerns entered into such tools do not inform better guidance or future improvement by a human professional. Without accountability or follow-up, these interactions risk becoming repetitive rather than productive.

Artificial intelligence can assist with efficiency, organization, and idea generation. However, it cannot lead, mentor, or evaluate. It does not set standards or care about outcomes. Treating AI as a substitute for human expertise risks replacing growth with comfort. Its value lies in support, not authority, and its effectiveness depends on how responsibly it is used.

TikTok Algorithm's US Fate: Joint Venture Secures Control Amid Ownership Clouds

 

One of the most important components of TikTok’s success has been its powerful recommendation algorithm, although its usefulness in the United States is contingent upon a new binding joint venture agreement with ByteDance. Dubbed by some as “TikTok’s crown jewel,” this technology is currently under intense scrutiny due to national security concerns.

In the latter part of 2025, ByteDance signed binding deals to form a joint venture in the United States, headed by Oracle, Silver Lake, and MGX. This deal will transfer control of TikTok’s U.S. app to American and foreign investors, with a planned completion date of January 22, 2026. The aim is to avoid a ban and to separate the handling of U.S. data from ByteDance’s control, while the parent company holds a 19.9% stake.

However, there is still some uncertainty as to the final ownership of the algorithm, considering ByteDance’s previous commitment to wind down TikTok in the United States rather than sell it. As per the agreement, the joint venture will be responsible for the management of U.S. user data, content moderation, and the security of the algorithm, and will also retrain the algorithm exclusively on U.S. data obtained by Oracle. The revenue streams, including advertising and e-commerce, will be handled by a ByteDance subsidiary, with revenue shared with the joint venture. 

China’s export control regime in 2020 requires government approval for the transfer of algorithms or source code, making it difficult to share them across borders, and it is unclear what ByteDance’s stance is on this matter. There are also debates about whether ByteDance has completely relinquished control of the technology or simply licensed it, with some comparing Oracle’s role to that of a monitor.

TikTok's algorithm is distinguished by its reliance on “interest signals” rather than the social graphs favoured by rivals such as Meta, and it adjusts to users' shifting interests, including daily or even hourly fluctuations. Combined with the short-video format and a mobile-first approach, this produces highly personalized feeds that give TikTok a competitive edge over later entrants such as Instagram Reels (2020) and YouTube Shorts (2021).

The algorithm's sophistication is supported by empirical research. A study conducted in the US and Germany with 347 participants, including automated agents, found that the algorithm “exploits” users’ established interests in roughly 30-50% of recommendations, with the remainder devoted to exploratory content beyond those preferences, served to refine the system's model of the user or to extend session length. TikTok executives see this serendipitous blend of familiarity and discovery as key to user retention.
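
As a generic illustration of that exploitation-versus-exploration balance (not a description of TikTok's proprietary system), the sketch below shows an epsilon-greedy recommender that spends a configurable fraction of impressions probing outside a user's established interests:

    # Generic exploitation-vs-exploration illustration (epsilon-greedy); this is
    # not TikTok's algorithm, and the exploration_rate used here is arbitrary.
    import random

    def recommend(known_interests: list[str], catalog: list[str],
                  exploration_rate: float = 0.5) -> str:
        """Return one item: usually from known interests, sometimes exploratory."""
        if random.random() < exploration_rate:
            novel = [c for c in catalog if c not in known_interests]
            return random.choice(novel or catalog)   # explore beyond the profile
        return random.choice(known_interests)        # exploit established interests

    interests = ["cooking", "cats"]
    catalog = ["cooking", "cats", "woodworking", "astronomy", "climbing"]
    picks = [recommend(interests, catalog) for _ in range(1000)]
    print(sum(p not in interests for p in picks) / len(picks))  # observed exploratory share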

Cybersecurity Falls Behind as Threat Scale Outpaces Capabilities


Cyber defence enters 2026 with the balance of advantage increasingly determined by speed rather than sophistication, as the window between intrusion and impact is now measured in minutes rather than days.

As breakout times fall below an hour and identity-based compromise replaces malware as the dominant method of entry into enterprise environments, threat actors are now operating faster, quieter, and with greater precision than ever before. 

Artificial intelligence has become a decisive accelerator: phishing, fraud, and reconnaissance can now be executed at unprecedented scale with minimal technical knowledge. The commoditization and automation of capabilities that once required specialized skills have dramatically lowered the barrier to entry for attackers.

The rapid, widespread adoption of artificial intelligence across both offensive and defensive cyber operations is a principal driver of what Moody's Ratings describes as a "new era of adaptive, fast-evolving threats" that organizations must now contend with.

The firm's 2026 Cyber Risk Outlook highlights a key reality for chief information security officers, boards of directors, and enterprise risk leaders: artificial intelligence is not just another cybersecurity tool; it is reshaping the velocity, scale, and unpredictability of cyber risk, and with it how that risk is managed, assessed, and governed across a broad range of sectors.

Despite years of investment and innovation, enterprise security rarely fails for lack of tools or advanced technology. It fails more often because operating models place excessive, misaligned expectations on human defenders, forcing them to perform repetitive, high-stakes tasks with fragmented and incomplete information.

Security models originally designed to protect static environments now face a far more dynamic landscape. Attack surfaces shift constantly as endpoints change state, cloud resources are created and retired, and mobile and operational technologies extend exposure well beyond traditional perimeters.

Threat actors increasingly exploit this fluidity, chaining minor vulnerabilities one after another, confident that defenders will eventually be unable to keep up.

A wide gap exists between the speed of the environment and the limits of human-centered workflows, as security teams continue to rely heavily on manual processes to assess alerts, establish context, and decide when to act.

Attempts to remedy this imbalance by adding more security products have often compounded the problem, increasing operational friction as tools overlap, alert fatigue builds, and complex handoffs multiply.

Automation has eased some of this burden, but it still depends on human-defined rules, approvals, and thresholds, leaving many companies with security programs that look sophisticated on paper yet remain too slow to respond decisively in a crisis. Assessments from global security bodies reinforce the point that artificial intelligence is rapidly changing both the nature and the scale of cyber risk.

A report from the Cloud Security Alliance (CSA) identifies AI as one of the most consequential trends of recent years, with further improvements and broader adoption expected to amplify its impact across the threat landscape as a whole. The CSA cautions that while these developments offer operational benefits, malicious actors can exploit them as well, particularly by making social engineering and fraud more effective.

AI models trained on ever-larger data sets produce output that is more convincing and operationally useful, allowing threat actors to replicate research findings and translate them directly into attack campaigns.

The CSA believes generative AI is already lowering the barriers to more advanced forms of cybercrime, including automated hacking and the potential emergence of AI-enabled worms.

David Koh, Singapore's Commissioner of Cybersecurity and Chief Executive of its Cyber Security Agency, has argued that generative artificial intelligence adds an entirely new dimension to cyber threats, with attackers able to match its growing sophistication and accessibility with capabilities of their own.

The World Economic Forum's Global Cybersecurity Outlook 2026 closely aligns with this assessment, reframing cybersecurity as a structural condition of the global digital economy rather than a purely technical or business risk. According to the report, cyber risk is being shaped by a convergence of forces, including artificial intelligence, geopolitical tension, and the rapid rise of cyber-enabled financial crime.

A study conducted by the Dublin Institute for Security Studies suggests that one of the greatest challenges facing organizations is not the emergence of new threats but the growing inadequacy of existing security and governance operating models.

The WEF identifies the rise of artificial intelligence as the most consequential factor shaping cyber risk. More than 94 percent of senior leaders believe AI-related risks can be adequately managed across their organizations, yet fewer than half say they are confident in their own ability to manage them.

Industry analysts, including fraud and identity specialists, say this gap underscores a larger concern: artificial intelligence is making scams more convincing and more scalable through automation and mass targeting. Taken together, these trends point to a widening gulf between the speed at which cyber threats evolve and organizations' ability to detect, respond to, and govern them effectively.

Tanium offers one example of how the shift from tool-centric security to outcome-driven models is taking shape in practice, part of a broader trend of security vendors trying to translate these principles into operational reality.

Rather than proposing autonomy as a wholesale replacement for established processes, the company emphasizes real-time endpoint intelligence and agentic AI as a way to inform and support decision-making within existing operational workflows.

The objective is not a fully autonomous system but giving organizations the ability to decide how quickly they are ready to adopt automation. Tanium's leadership describes autonomous IT as an incremental journey, one involving deliberate choices about human involvement, governance, and control.

Most companies begin by letting systems recommend actions that are manually reviewed and approved, then gradually permit automated execution within clearly defined parameters as confidence builds.
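
To make that progression concrete, the sketch below shows one way a graduated automation gate might be expressed in code. It is purely illustrative rather than any vendor's implementation; the stage names, action list, confidence threshold, and blast-radius caps are all assumptions for the example.

```python
# Illustrative sketch of a graduated automation gate (hypothetical names and thresholds,
# not any vendor's product). Recommended actions either run automatically or are queued
# for human approval, depending on how far the organization has moved along the journey.
from dataclasses import dataclass

# Adoption stages: start with recommend-only, then auto-execute low-risk actions,
# and only later allow broader automated execution within agreed limits.
STAGES = ("recommend_only", "auto_low_risk", "auto_scoped")

# Actions the organization has explicitly pre-approved for automation, with a cap
# on how many endpoints a single automated run may touch ("blast radius").
PREAPPROVED_ACTIONS = {"quarantine_endpoint": 5, "kill_process": 20, "apply_patch": 100}

@dataclass
class Recommendation:
    action: str
    target_count: int   # number of endpoints the action would affect
    confidence: float   # detection confidence reported by the analytics layer

def route(rec: Recommendation, stage: str) -> str:
    """Return 'auto_execute' or 'needs_approval' for a recommended action."""
    if stage == "recommend_only":
        return "needs_approval"
    limit = PREAPPROVED_ACTIONS.get(rec.action)
    if limit is None or rec.confidence < 0.9:
        return "needs_approval"          # unknown or low-confidence actions always go to a human
    if stage == "auto_low_risk" and rec.target_count > 1:
        return "needs_approval"          # early stage: automate only single-endpoint actions
    if rec.target_count > limit:
        return "needs_approval"          # respect the pre-agreed blast-radius cap
    return "auto_execute"

print(route(Recommendation("quarantine_endpoint", 1, 0.97), "auto_low_risk"))  # auto_execute
print(route(Recommendation("apply_patch", 250, 0.99), "auto_scoped"))          # needs_approval
```

The specific thresholds matter less than the shape of the policy: human approval remains the default, and automation is extended only to actions that have been explicitly and narrowly delegated.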

This measured approach reflects a wider industry understanding that autonomous systems scale best when integrated directly into familiar platforms, such as service management and incident response systems, rather than bolted on as a separate layer.

Vendors hope that by feeding live endpoint intelligence into tools such as ServiceNow, security teams can shorten response times without reorganizing their operations. In essence, the change recognizes that enterprise security is not about eliminating complexity but about managing it without exhausting the people guarding increasingly dynamic environments.
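
As a rough sketch of what that plumbing can look like, the snippet below files an endpoint detection into ServiceNow as an incident using ServiceNow's standard Table API. The instance URL, credentials, and field values are placeholders, and a real deployment would typically rely on a supported connector and proper secret management rather than raw REST calls.

```python
# Minimal sketch: turning an endpoint alert into a ServiceNow incident via the
# standard Table API (POST /api/now/table/incident). Instance, credentials, and
# field values below are placeholders, not a recommended production setup.
import requests

INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("integration_user", "secret")          # real integrations should use OAuth or a vault

def open_incident(host: str, detection: str, severity: int) -> str:
    """Create an incident from endpoint telemetry and return its sys_id."""
    payload = {
        "short_description": f"[EDR] {detection} on {host}",
        "description": f"Automated incident: {detection} detected on {host}.",
        "urgency": str(severity),   # 1 = high, 3 = low in a default ServiceNow setup
        "category": "security",
    }
    resp = requests.post(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]

# Example: a high-severity credential-dumping detection on one laptop.
# print(open_incident("LAPTOP-0423", "credential dumping (LSASS access)", 1))
```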

Effective autonomy does not require removing humans from the loop; it requires redistributing effort. Machines are better suited to continuous monitoring, correlation, and execution at scale, while humans are better suited to judgment, strategic decision-making, and exceptional cases.

This transition will be defined not by a single technological breakthrough but by the gradual building of trust in automated decisions. Security leaders must recognize that success lies in creating resilient systems that keep pace with an evolving threat landscape, not in pursuing the latest innovation for its own sake.

Looking ahead, organizations will find that their security posture depends less on acquiring the next breakthrough technology than on reshaping how cyber risk is managed and absorbed. Security strategies must evolve for a real-world environment in which speed, adaptability, and resilience matter as much as detection.

That means elevating cybersecurity from an operational concern to a board-level discipline, aligning risk ownership with business decision-making, and favoring architectures built around real-time visibility and automation.

Organizations will also need to place greater emphasis on workforce sustainability, ensuring that human talent is applied where it adds the most value rather than being consumed by routine triage.

As autonomy expands, vendors and enterprises alike will need to demonstrate not only technical capability but also transparency, accountability, and control.

In an environment shaped by AI, geopolitical tension, and rising cyber-enabled economic crime, the strongest security programs will be those that combine technological leverage with disciplined governance and earned trust.

The goal is no longer simply to stop attacks, but to build systems and teams capable of responding decisively at the pace today's threat landscape demands.

Google Rolls Out Gmail Address Change Feature

 

Google has rolled out a major update that allows users to change their primary @gmail.com address, with availability beginning in January 2026. Until now, Gmail users were stuck with their original username for the entire life of the account, which often pushed people to create new accounts for a fresh start. The update addresses long-standing problems such as addresses that were poorly chosen, or set up by family members, earlier in life and have since become outdated.

The feature turns the former address into an alias, preserving continuity without losing data. Emails sent to either the old or the new address land in the same inbox, and all account content, including photos, messages, and Drive files, is retained. Devices signed in with the former address do not need to log out, since both addresses are tied to the same Google account across services such as YouTube, Maps, and the Play Store.

The feature can be accessed at myaccount.google.com/google-account-email, or by navigating to Personal info > Email > Google Account email and choosing the change option when it appears. Users enter a new, available username and complete the change through email confirmation. If the option does not appear, the rollout has not yet reached that account; initial reports surfaced on Hindi-language support pages, suggesting a global rollout is underway.

The feature includes built-in protections against abuse: once you change your address, you cannot change it again for one year. The old username remains attached to the account as an alias for life, so you can still sign in or send and receive mail with it, but you cannot keep switching between names.

For security-conscious users and creators, the change improves privacy: old handles tied to personal history can be retired without any data migration, and the risk posed by old addresses that surface in data breaches is reduced. As the rollout continues through 2026, it is worth keeping an eye on Google's official support pages.

Google Appears to Be Preparing Gemini Integration for Chrome on Android

 

Google appears to be testing a new feature that could significantly change how users browse the web on mobile devices. The company is reportedly experimenting with integrating its AI model, Gemini, directly into Chrome for Android, enabling advanced agentic browsing capabilities within the mobile browser.

The development was first highlighted by Leo on X, who shared that Google has begun testing Gemini integration alongside agentic features in Chrome’s Android version. These findings are based on newly discovered references within Chromium, the open-source codebase that forms the foundation of the Chrome browser.

Additional insight comes from a Chromium post, where a Google engineer explained the recent increase in Chrome’s binary size. According to the engineer, "Binary size is increased because this change brings in a lot of code to support Chrome Glic, which will be enabled in Chrome Android in the near future," suggesting that the infrastructure needed for Gemini support is already being added. For those unfamiliar, “Glic” is the internal codename used by Google for Gemini within Chrome.

While the references do not reveal exactly how Gemini will function inside Chrome for Android, they strongly indicate that Google is actively preparing the feature. The integration could mirror the experience offered by Microsoft Copilot in Edge for Android. In such a setup, users might see a floating Gemini button that allows them to summarize webpages, ask follow-up questions, or request contextual insights without leaving the browser.

On desktop platforms, Gemini in Chrome already offers similar functionality by using the content of open tabs to provide contextual assistance. This includes summarizing articles, comparing information across multiple pages, and helping users quickly understand complex topics. However, Gemini’s desktop integration is still not widely available. Users who do have access can launch it using Alt + G on Windows or Ctrl + G on macOS.

The potential arrival of Gemini in Chrome for Android could make AI-powered browsing more accessible to a wider audience, especially as mobile devices remain the primary way many users access the internet. Agentic capabilities could help automate common tasks such as researching topics, extracting key points from long articles, or navigating complex websites more efficiently.

At present, Google has not confirmed when Gemini will officially roll out to Chrome for Android. However, the appearance of multiple references in Chromium suggests that development is progressing steadily. With Google continuing to expand Gemini across its ecosystem, an official announcement regarding its availability on Android is expected in the near future.

AI Agent Integration Can Become a Problem in Workplace Operations


Not long ago, AI agents were considered harmless. They did what they were supposed to do: write snippets of code, answer questions, and help users get things done faster.

Then businesses started expecting more.

Gradually, companies moved from personal copilots to organizational agents, integrated into customer support, HR, IT, engineering, and operations. These agents no longer just suggest; they act, touching real systems, changing configurations, and moving real data:

  • A support agent that pulls customer data from the CRM, triggers backend fixes, updates tickets, and checks billing.
  • An HR agent that provisions and oversees access across VPNs, IAM, and SaaS apps.
  • A change management agent that processes requests, logs actions in ServiceNow, and updates production configurations and Confluence.

These agents automate oversight and control, and they have become core to companies' operational infrastructure.

How AI agents work

Organizational agents are built to operate across many resources, supporting multiple roles, users, and workflows through a single deployment. Rather than being tied to an individual user, they function as shared services that handle requests and automate work across systems for many users at once.

To work effectively, these agents depend on shared accounts, OAuth grants, and API keys to authenticate to the systems they interact with. These credentials are long-lived and centrally managed, enabling the agent to operate continuously.
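
The sketch below illustrates the pattern with the standard OAuth 2.0 client-credentials grant; the identity provider URL, client ID, and scopes are hypothetical. The key point is that the token represents the agent itself, so nothing about the human requester travels with it.

```python
# Minimal sketch (not any vendor's actual integration): an organizational agent
# authenticating with the standard OAuth 2.0 client-credentials grant.
# The token it receives represents the agent itself -- the human whose request
# triggered the call appears nowhere in the credential.
import requests

TOKEN_URL = "https://idp.example.com/oauth2/token"   # hypothetical identity provider
CLIENT_ID = "support-agent"                          # shared, long-lived agent identity
CLIENT_SECRET = "..."                                # centrally managed secret

def get_agent_token(scope: str) -> str:
    """Fetch an access token scoped to the agent, not to any end user."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": scope,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# Any downstream API call made with this token is attributed to "support-agent",
# regardless of which employee or customer actually asked for the action.
token = get_agent_token("crm.read tickets.write")
```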

The threat of AI agents in the workplace

While this approach maximizes convenience and coverage, these design choices can unintentionally create powerful access intermediaries that bypass traditional permission boundaries. Actions can appear legitimate and harmless even when an agent is effectively granting access beyond what the requesting user is authorized to do.

When execution is attributed to the agent's identity, user context is lost and reliable detection and attribution become impossible. Conventional security controls are poorly suited to agent-mediated workflows because they assume direct system access by human users: IAM systems enforce permissions according to the user's identity, but when an AI agent performs an action, authorization is assessed against the agent's identity rather than the requester's.
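
A minimal sketch of why that matters is shown below, using hypothetical role and permission names. The IAM-style check evaluates whatever principal makes the call, so an action the requesting user could never perform directly still succeeds when routed through a more privileged agent.

```python
# Illustrative sketch only (hypothetical policy and role names): why agent-mediated
# calls bypass user-level permissions. Authorization is evaluated against the
# identity presented on the request -- here, the agent -- not the person who asked.
ROLE_PERMISSIONS = {
    "support-agent": {"crm:read", "tickets:write", "billing:adjust"},  # broad agent role
    "tier1-analyst": {"crm:read", "tickets:write"},                    # narrower human role
}

def is_authorized(principal: str, permission: str) -> bool:
    """Standard IAM-style check: permissions follow the calling principal."""
    return permission in ROLE_PERMISSIONS.get(principal, set())

# A tier-1 analyst asks the agent to adjust a customer's bill.
requester, agent = "tier1-analyst", "support-agent"

print(is_authorized(requester, "billing:adjust"))  # False -- the analyst cannot do this directly
print(is_authorized(agent, "billing:adjust"))      # True  -- but the agent can, so the action proceeds

# A stricter design would require both checks to pass, e.g.:
# allowed = is_authorized(agent, perm) and is_authorized(requester, perm)
```

One common mitigation, hinted at in the final comment, is to require the authorization check to pass for both the agent and the on-behalf-of user before the action runs.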

The impact 

The result is that user-level limitations effectively stop applying. Logging and audit trails make matters worse by attributing behavior to the agent's identity and concealing who initiated an action and why. Security teams cannot enforce least privilege, spot misuse, or accurately attribute intent, so permission bypasses can occur without triggering conventional safeguards. The lack of attribution also slows incident response, complicates investigations, and makes it hard to determine the scope or intent of a security incident.