Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Technology. Show all posts

AI Coding Platform Orchids Exposed to Zero-Click Hack in BBC Security Test

 


A BBC journalist has demonstrated an unresolved cybersecurity weakness in an artificial intelligence coding platform that is rapidly gaining users.

The tool, called Orchids, belongs to a new category often referred to as “vibe-coding.” These services allow individuals without programming training to create software by describing what they want in plain language. The system then writes and executes the code automatically. In recent months, platforms like this have surged in popularity and are frequently presented as examples of how AI could reshape professional work by making development faster and cheaper.

Yet the same automation that makes these tools attractive may also introduce new forms of exposure.

Orchids states that it has around one million users and says major technology companies such as Google, Uber, and Amazon use its services. It has also received strong ratings from software review groups, including App Bench. The company is headquartered in San Francisco, was founded in 2025, and publicly lists a team of fewer than ten employees. The BBC said it contacted the firm multiple times for comment but did not receive a response before publication.

The vulnerability was demonstrated by cybersecurity researcher Etizaz Mohsin, who has previously uncovered software flaws, including issues connected to surveillance tools such as Pegasus. Mohsin said he discovered the weakness in December 2025 while experimenting with AI-assisted coding. He reported attempting to alert Orchids through email, LinkedIn, and Discord over several weeks. According to the BBC, the company later replied that the warnings may have been overlooked due to a high volume of incoming messages.

To test the flaw, a BBC reporter installed the Orchids desktop application on a spare laptop and asked it to generate a simple computer game modeled on a news website. As the AI produced thousands of lines of code on screen, Mohsin exploited a security gap that allowed him to access the project remotely. He was able to view and modify the code without the journalist’s knowledge.

At one point, he inserted a short hidden instruction into the project. Soon after, a text file appeared on the reporter’s desktop stating that the system had been breached, and the device’s wallpaper changed to an image depicting an AI-themed hacker. The experiment showed that an outsider could potentially gain control of a machine running the software.

Such access could allow an attacker to install malicious programs, extract private corporate or financial information, review browsing activity, or activate cameras and microphones. Unlike many common cyberattacks, this method did not require the victim to click a link, download a file, or enter login details. Security professionals refer to this technique as a zero-click attack.

Mohsin said the rise of AI-driven coding assistants represents a shift in how software is built and managed, creating new categories of technical risk. He added that delegating broad system permissions to AI agents carries consequences that are not yet fully understood.

Although Mohsin said he has not identified the same flaw in other AI coding tools such as Claude Code, Cursor, Windsurf, or Lovable, cybersecurity academics urge caution. Kevin Curran, a professor at Ulster University, noted that software created without structured review and documentation may be more vulnerable under attack.

The discussion extends beyond coding platforms. AI agents designed to perform tasks directly on a user’s device are becoming more common. One recent example is Clawbot, also known as Moltbot or Open Claw, which can send messages or manage calendars with minimal human input and has reportedly been downloaded widely.

Karolis Arbaciauskas, head of product at NordPass, warned that granting such systems unrestricted access to personal devices can expose users to serious risks. He advised running experimental AI tools on separate machines and using temporary accounts to limit potential damage.

Google Observes Threat Actors Deploying AI During Live Network Breaches


 

As synthetic intelligence has become a staple in modern organizations, the field has transformed how they analyze data, make automated decisions, and defend their digital perimeters, moving from experimental labs to the operational bloodstream. However, with the incorporation of these systems deeper into company infrastructure, the technology itself is becoming both a strategic asset and a desirable target for companies. 

Adversaries seeking leverage are now studying, imitating, and in some cases quietly manipulating the same models used to draft code, triage alerts, and streamline workflows. As Fast Company points out, this dual reality is redefining cyber risk, putting AI at the heart of both defense strategy and offensive innovation. 

Insights from Google Cloud's AI Threat Tracker indicate that this shift is accelerating rapidly. There has been a significant increase in model extraction attempts, or "distillation" attempts, which are attempts by attackers to systematically query proprietary artificial intelligence systems to estimate their underlying capabilities, without ever breaching a network in its traditional sense, according to the report. 

Google Threat Intelligence observes that state-aligned and financially motivated actors affiliated with China, Iran, North Korea, and Russia are integrating artificial intelligence tools into nearly every stage of the intrusion lifecycle. 

A growing number of these campaigns include automated reconnaissance, vulnerability mapping, and highly tailored social engineering, which can be carried out with minimal direct human intervention and are increasingly modular, scalable, and effective. 

In accordance with these findings, a newly released assessment by Google Threat Intelligence Group indicates a more operational phase of the threat landscape has begun. This analysis warns that adversaries are no longer considering artificial intelligence a peripheral experiment, but are instead embedding it directly into live attack workflows.

In particular, the targeting and misuse of Gemini models is highlighted, reflecting a broader trend in which commercially available generative systems are systematically evaluated, stressed, and sometimes incorporated into malicious toolchains. 

Researchers documented instances in which active malware strains initiated direct calls to Gemini during runtime through the application programming interface. In the absence of hard-coding all functional components within the malware binary, operators dynamically requested task-specific source code as the intrusion progressed from the model.

As part of the HONESTCUE malware family, structured prompts were issued to obtain C# code snippets that were subsequently executed within its attack chain. By externalizing portions of its logic, the malware was able to reduce its static footprint and complicate detection strategies that utilize signature matching or behavioral heuristics. 

Further, the report describes sustained efforts to perform model extraction attacks, also known as distillation attacks. These operations involved the generation of large volumes of carefully sequenced queries that mapped response patterns and approximated internal decision boundaries by threat actors. 

A key objective of adversaries is to replicate certain aspects of proprietary model performance through iterative analysis, so that they can train substitute systems without being required to bear the entire cost and workload associated with the development of a large-scale model. 

A Google representative has reported that multiple campaigns characterized by abnormal prompt velocity and structured probing activities intended to harvest Gemini's underlying capabilities have been identified and disrupted. This underscores the importance of safeguards which address not only data exfiltration, but also model intelligence protection as well. 

According to CrowdStrike, parallel intelligence strengthens our assessment that artificial intelligence integration is materially slowing down the tempo of modern intrusions. According to the investigators, adversaries are generating single-line commands for reconnaissance, credential harvesting, and data staging on compromised hosts by executing large language models in real time on compromised hosts. This effectively shifts tactical decision-making to on-demand AI systems. 

Metrics indicate that the firm's operational acceleration in 2025 has resulted in an average “breakout time” of eCrime, or the interval between initial access and lateral movement towards high-value assets, dropping to 29 minutes, with the fastest observed transition occurring within 27 seconds.

It was documented that the LAMEHUG malware utilized an external LLM via Hugging Face API to generate dynamic commands for enumerating hardware profiles, processes, services, network configurations and Active Directory domain data based upon minimal embedded prompts. Through outsourcing reconnaissance logic to a model, operators reduced the need for pre-compiled modules, enabling rapid adaptation without modifying the underlying binary. 

A single threat actor can pivot interactively by issuing contextualized instructions that are responsive to the environment in real time as a consequence of this architectural choice. There has been a continued focus on the technology sector, emphasizing its concentration of privileged access paths and its systemic significance throughout the supply chain. 

In addition, CrowdStrike noted that artificial intelligence is extending across multiple phases of the intrusion lifecycle. The number of incidents involving fake CAPTCHA lures grew by 563 percent in 2025 when compared with 2024, indicating the use of generative systems in social engineering. Some moderately resourced groups, such as Punk Spider, have been observed utilizing Gemini and DeepSeek to develop scripts designed to extract credentials from backup archives, terminate defensive services, and erase forensic evidence. 

Scripting that makes use of artificial intelligence (AI) narrows the capability gap between mid-tier criminal operators and highly-trained red teams, enabling coordinated attack chains which combine identity abuse, backup compromise, and domain escalation within a single attack chain. 

Separately, adversaries distributed malicious npm packages that instructed malicious AI command-line tools to generate commands for exfiltrating authentication material and cryptoassets. The incident responders reported the discovery of over 90 environments executing this adversary-developed AI workflow, indicating a trend toward threat actors delegating core post-exploitation functions to intelligent agents within enterprise networks. Model-driven approaches are also being implemented by state-aligned groups.

The Russian-linked collective FANCY BEAR deployed LAMEHUG against Ukrainian government entities, embedding prompts that instructed the model to copy Office documents and PDF documents, gather domain intelligence, and stage system data into text files for exfiltration by embedding prompts into the model. 

Underground forums reflect this operational shift. ChatGPT references outnumbered any other model by a significant margin by 2025, a development attributed less to technical preference than to the platform's widespread recognition and accessibility. This campaign illustrates how quickly reconnaissance, targeting, and staging can be automated once a model has been incorporated within an intrusion toolchain, despite the fact that LLM-enabled malware has not yet been proven more effective than traditional tools. 

It appears that AI will serve as a force multiplier, reducing operating friction and compressing timelines as well as reshaping expectations surrounding attacker speed and adaptability in the near future. 

Furthermore, Google announced that it worked with industry partners to dismantle an infrastructure associated with a suspected China-nexus espionage actor trackable as UNC2814 to emphasize the convergence of cloud platforms and covert command infrastructure. 

Approximately 53 organizations within 42 countries have been compromised as a result of the group's penetration, according to findings published by Google Threat Intelligence Group and Mandiant, with additional suspected intrusions in 20 other countries suspected. It is reported that the actor has maintained access to international government entities and global telecommunications providers across Africa, Asia, and the Americas for an extended period of time since at least 2017.

The investigators observed that the group utilized API calls to legitimate software as a service applications as a command-and-control strategy, intentionally intermixing malicious traffic with routine cloud communication. This operation is supported by the use of a C-based backdoor referred to as GRIDTIDE, which exploits the Google Sheets API for covert communication. 

The malware implements a polling mechanism by embedding command logic within spreadsheet cells, thereby retrieving attacker instructions and returning execution status codes from cell A1. A pair of adjacent cells facilitate bidirectional data transmission, including command output and file exfiltration staging. A second cell stores the compromised host's system metadata. This design facilitates remote data transfer and data tasking while concealing C2 exchanges in otherwise benign API activity. 

Although GRIDTIDE was identified in multiple environments, researchers were unable to definitively determine if every intrusion was based on the same payload. The initial access vectors are currently being investigated; however, UNC2814 has historically exploited vulnerable web servers and edge devices to gain access. 

As part of the post-compromise activity, service accounts were used to move laterally via SSH, living-off-the-land binaries were extensively used for reconnaissance and privilege escalation, as well as persistence through an embedded systemd service, deployed at /etc/systemd/system/xapt.service, which activated a new malware instance from /usr/sbin/xapt once activated.

The campaign also included the deployment of SoftEther VPN Bridge to create outbound encrypted tunnels to external infrastructure, which has previously been associated with multiple China-linked threat clusters. 

Based on forensic analysis, GRIDTIDE appears to have been selectively deployed on endpoints containing personally identifiable information in order to obtain intelligence on specific individuals or entities. Google reported that no confirmed evidence of data exfiltration occurred during the observed activity window. 

The remediation measures included terminating attacker-controlled Google Cloud projects, disabling UNC2814 infrastructure, robbing access to compromised accounts, and blocking the misuse of Google Sheets API endpoints utilized for C2 operations as part of Google's remediation measures. 

An official notification was sent to affected organizations and direct incident response support was provided to confirmed victims following the launch of this campaign, described as one among the most extensive and strategic campaigns that the company has encountered in recent years. All together, these disclosures indicate that artificial intelligence will become embedded in enterprise workflows with the same rigor as privileged infrastructure. 

As AI models, APIs, and service accounts become more integrated into enterprise workflows, they will need to be governed with the same level of rigorousness as privileged infrastructure. Security leaders should ensure that these assets are treated with strict access controls, anomaly detection, and continuous logging as high-value assets.

Increasing the effectiveness of threat hunting programs must include monitoring for abnormal prompt velocity, unusual API polling patterns, and model-driven command execution. As part of this effort, organizations should evaluate identity hygiene, restrict outbound connectivity from sensitive workloads, and harden edge systems that serve as the initial point of entry for hackers. 

An adversary who attempts to blend malicious traffic with legitimate SaaS communications can be contained with cloud-native telemetry, behavioral analytics, and zero-trust segmentation. The development of defensive strategies must therefore proceed parallel to the operationalization of artificial intelligence across reconnaissance, lateral movement, and persistence, with a particular focus on the security of models, the integrity of supply chains, and the coordination of rapid response activities. 

A clear lesson has emerged: Artificial intelligence is no longer peripheral to cyber security risk, but has become integral to both the threat model and the defense architecture designed to counteract it.

GitHub Fixes AI Flaw That Could Have Exposed Private Repository Tokens

 



A now-patched security weakness in GitHub Codespaces revealed how artificial intelligence tools embedded in developer environments can be manipulated to expose sensitive credentials. The issue, discovered by cloud security firm Orca Security and named RoguePilot, involved GitHub Copilot, the AI coding assistant integrated into Codespaces. The flaw was responsibly disclosed and later fixed by Microsoft, which owns GitHub.

According to researchers, the attack could begin with a malicious GitHub issue. An attacker could insert concealed instructions within the issue description, specifically crafted to influence Copilot rather than a human reader. When a developer launched a Codespace directly from that issue, Copilot automatically processed the issue text as contextual input. This created an opportunity for hidden instructions to silently control the AI agent operating within the development environment.

Security experts classify this method as indirect or passive prompt injection. In such attacks, harmful instructions are embedded inside content that a large language model later interprets. Because the model treats that content as legitimate context, it may generate unintended responses or perform actions aligned with the attacker’s objective.

Researchers also described RoguePilot as a form of AI-mediated supply chain attack. Instead of exploiting external software libraries, the attacker leverages the AI system integrated into the workflow. GitHub allows Codespaces to be launched from repositories, commits, pull requests, templates, and issues. The exposure occurred specifically when a Codespace was opened from an issue, since Copilot automatically received the issue description as part of its prompt.

The manipulation could be hidden using HTML comment tags, which are invisible in rendered content but still readable by automated systems. Within those hidden segments, an attacker could instruct Copilot to extract the repository’s GITHUB_TOKEN, a credential that provides elevated permissions. In one demonstrated scenario, Copilot could be influenced to check out a specially prepared pull request containing a symbolic link to an internal file. Through techniques such as referencing a remote JSON schema, the AI assistant could read that internal file and transmit the privileged token to an external server.

The RoguePilot disclosure comes amid broader concerns about AI model alignment. Separate research from Microsoft examined a reinforcement learning method called Group Relative Policy Optimization, or GRPO. While typically used to fine-tune large language models after deployment, researchers found it could also weaken safety safeguards, a process they labeled GRP-Obliteration. Notably, training on even a single mildly problematic prompt was enough to make multiple language models more permissive across harmful categories they had never explicitly encountered.

Additional findings stress upon side-channel risks tied to speculative decoding, an optimization technique that allows models to generate multiple candidate tokens simultaneously to improve speed. Researchers found this process could potentially reveal conversation topics or identify user queries with significant accuracy.

Further concerns were raised by AI security firm HiddenLayer, which documented a technique called ShadowLogic. When applied to agent-based systems, the concept evolves into Agentic ShadowLogic. This approach involves embedding backdoors at the computational graph level of a model, enabling silent modification of tool calls. An attacker could intercept and reroute requests through infrastructure under their control, monitor internal endpoints, and log data flows without disrupting normal user experience.

Meanwhile, Neural Trust demonstrated an image-based jailbreak method known as Semantic Chaining. This attack exploits limited reasoning depth in image-generation models by guiding them through a sequence of individually harmless edits that gradually produce restricted or offensive content. Because each step appears safe in isolation, safety systems may fail to detect the evolving harmful intent.

Researchers have also introduced the term Promptware to describe a new category of malicious inputs designed to function like malware. Instead of exploiting traditional code vulnerabilities, promptware manipulates large language models during inference to carry out stages of a cyberattack lifecycle, including reconnaissance, privilege escalation, persistence, command-and-control communication, lateral movement, and data exfiltration.

Collectively, these findings demonstrate that AI systems embedded in development platforms are becoming a new attack surface. As organizations increasingly rely on intelligent automation, safeguarding the interaction between user input, AI interpretation, and system permissions is critical to preventing misuse within trusted workflows.

New IT Rules Mandate Three Hour Deadline for Deepfake Takedowns


For the first time in India's digital governance landscape, the Union government has formally placed artificial intelligence-generated content within an enforceable regulatory framework, including deepfake videos, synthetic audio fabrications, and digitally altered visuals.

It has been announced through a Gazette Notification number G.S.R. 120(E), signed by Joint Secretary Ajit Kumar, that the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, will come into force on February 20, 2026. Despite its perceived fringe status, manipulated media is now recognized as a mainstream threat capable of distorting public discourse, reputations, and democratic processes as a mainstream issue. 

Government officials have drawn a sharper regulatory boundary around a rapidly expanding digital grey zone by tightening the obligations of intermediaries and defining accountability around artificial intelligence-driven deception. Considering the rapid proliferation of synthetic media across digital platforms, the notification provides a calibrated regulatory response. 

Through the incorporation of artificial intelligence-manipulated content into the Information Technology framework compliance architecture, the amendment clarifies intermediary liability, strengthens due diligence requirements, and narrows interpretive ambiguities associated with deepfake enforcement previously.

Essentially, algorithmically generated impersonations, voice clonings, and audiovisual material will no longer be considered peripheral anomalies, but rather regulated digital artefacts requiring legislative oversight. According to the revised rules, intermediaries are required to demonstrate mechanisms for detecting, expediting removal, and resolving user grievances involving deceptive or impersonative synthetic content. 

These requirements are intended to impose a defined compliance burden on intermediaries. In addition, the amendment recognizes that generative artificial intelligence systems have significantly reduced the threshold for large-scale misinformation, reputational manipulation, and misuse of identities. The government has done so by transitioning from advisory posture to enforceable mandate, enforcing the principle that technological innovations are not independent of regulatory responsibility while also incorporating AI-era content risks within India's formal digital compliance regime. 

In addition to expanding the regulatory scope, the 2026 amendment substantially adjusts the obligations of intermediaries concerning compliance with synthetically generated information and unlawful digital content, particularly in light of the expanded regulatory scope. Its effective date is February 20, 2026, and the revised framework amends the 2021 Rules by emphasizing enforceability, platform accountability, and informed user participation. 

In accordance with modified Rule 3(1)(c), intermediaries will now need to issue user advisories every three months, replacing an earlier annual disclosure, and explicitly stating what the consequences are for violating platform terms of service, privacy policies, or user agreements. Those users should be aware that non-compliance may result in suspensions or terminations of their access rights, as well as the potential for liability under applicable laws.

In addition to establishing mandatory reporting obligations in cases of cognizable offences, including those governed by the Protection of Children from Sexual Offences Act and the Bharatiya Nagarik Suraksha Sanhita, the amendment reinforces the integration of platform governance with criminal law enforcement mechanisms. However, the most significant procedural change relates to the compression of response timelines. 

There is now a significant reduction in the compliance window for takedown requests ordered by courts or law enforcement agencies from the previous 36-hour period. As a consequence, the removal time for nonconsensual intimate imagery has been reduced from 24 hours to two, and grievance redress mechanisms must resolve user complaints within seven days, effectively halving the previous deadline. 

To achieve compliance with these accelerated mandates, continuous monitoring frameworks need to be institutionalized, advanced automated detection systems must be deployed, and dedicated rapid-response compliance units need to be established that operate round-the-clock. 

A time-bound enforcement model replaces a comparatively lengthy procedural structure in the amendment to strengthen real-time coordination with law enforcement authorities and to limit the viral propagation of deepfakes and other forms of unlawful digital content before irreversible harm occurs. 

An initial draft framework was circulated by the Ministry of Electronics and Information Technology for stakeholder consultation in October 2025. This process was initiated as a result of the occurrence of several incidents that involved artificial intelligence-generated videos and voice recordings that falsely portrayed private individuals and public officials. 

In the period of elections and periods of social sensitivity, the proliferation of deepfake pornography, impersonation-based financial fraud, and misleading audiovisual clips has increased regulatory scrutiny. As well as reputational injury, concerns also encompass electoral integrity, public order, and the systematic amplification of misinformation within digital ecosystems that have a high rate of speed. 

While narrowing the definitional breadth while sharpening enforceability, the final notification clarifies the draft. The consultation version had characterized synthetically generated information in a broad sense, covering any content that is artificially or algorithmically constructed, modified, or altered. 

However, the notified rules place greater emphasis on material that misrepresents people, documents, or real-world events in a manner that is likely to be misleading. With this calibrated shift, interpretive overreach is reduced, while the compliance trigger is aligned with demonstrable harm and deceptive intent. 

In addition, the compliance architecture has been substantially strengthened. As a result of the amendment, intermediaries must disable access to flagged content within three hours of receiving a lawful government or court directive, reinforcing the accelerated enforcement regime. Further, the rules impose affirmative technical obligations on intermediaries that facilitate the creation or distribution of synthetic content.

Not only has this reduced the timeline for user grievances, but it also underscores a broader policy focus on real-time remediation. It is imperative that platforms employ reasonable technological safeguards to prevent the distribution of unlawful material, such as content regarding child sexual abuse, non-consensual intimate images, falsified electronic records, material relating to prohibited weapons and explosives, or depictions that mislead the public. 

The law requires intermediaries to include clear labels and embed durable provenance markers - such as permanent metadata or unique identifiers - that cannot be removed or suppressed by the end user in cases where synthetic content is not illegal per se. 

A significant social media intermediary should also require users to declare if uploaded material is synthetically generated, implement technical verification mechanisms to verify such declarations, and prominently label confirmed synthetic content before publication in order to validate such declarations. 

According to the notification, an intermediary that allows, promotes, or fails to act upon prohibited synthetic content in violation of these rules is deemed to have failed the statutory due diligence standard. Platforms must also inform users of the potential criminal liability, account suspension, and content removal implications of violations periodically.

The misuse of synthetic media may be subject to penalties under several legislation, such as the Bharatiya Nyaya Sanhita Act, the Protection of Children from Sexual Offences Act, and the Representation of the People Act. 

The amendment formally updates statutory references by replacing provisions of the Indian Penal Code with those of the Bharatiya Nyaya Sanhita, 2023, which is issued under Section 87 of the Information Technology Act. This results in the harmonisation of India's digital regulatory framework with a restructured criminal law system. 

Together, the amendments represent a broader process of recalibration of India's digital regulatory framework in response to the structural risks posed by generative technologies. The framework provides a more concise compliance roadmap and sharper enforcement triggers, however, its effectiveness will ultimately depend on consistency in implementation, technical readiness within intermediary ecosystems, and a coordinated approach between regulators, law enforcement agencies, and platform operators. 

According to legal observers, it is essential to invest consistently in forensic capability, algorithmic transparency, and institutional capacity if we are to prevent both regulatory overreach and underenforcement during the transition from policy intent to operational stability. 

By embracing synthetic media governance as a core platform architecture rather than merely treating it as an adjunct moderation function, intermediaries are signaling the need to reframe their approach to synthetic media governance. This reinforces the parallel responsibility of users and digital stakeholders to exercise discernment when consuming and disseminating artificial intelligence-generated content.

It is likely that the durability of this framework will depend not only on the statutory text, but also on an adaptive oversight process, technological innovation, and a digital citizenry prepared to navigate an increasingly mediated information environment as synthetic content technologies continue to evolve.

How Poorly Secured Endpoints Are Expanding Risk in LLM Infrastructure

 


As organizations build and host their own Large Language Models, they also create a network of supporting services and APIs to keep those systems running. The growing danger does not usually originate from the model’s intelligence itself, but from the technical framework that delivers, connects, and automates it. Every new interface added to support an LLM expands the number of possible entry points into the system. During rapid rollouts, these interfaces are often trusted automatically and reviewed later, if at all.

When these access points are given excessive permissions or rely on long-lasting credentials, they can open doors far wider than intended. A single poorly secured endpoint can provide access to internal systems, service identities, and sensitive data tied to LLM operations. For that reason, managing privileges at the endpoint level is becoming a central security requirement.

In practical terms, an endpoint is any digital doorway that allows a user, application, or service to communicate with a model. This includes APIs that receive prompts and return generated responses, administrative panels used to update or configure models, monitoring dashboards, and integration points that allow the model to interact with databases or external tools. Together, these interfaces determine how deeply the LLM is embedded within the broader technology ecosystem.

A major issue is that many of these interfaces are designed for experimentation or early deployment phases. They prioritize speed and functionality over hardened security controls. Over time, temporary testing configurations remain active, monitoring weakens, and permissions accumulate. In many deployments, the endpoint effectively becomes the security perimeter. Its authentication methods, secret management practices, and assigned privileges ultimately decide how far an intruder could move.

Exposure rarely stems from a single catastrophic mistake. Instead, it develops gradually. Internal APIs may be made publicly reachable to simplify integration and left unprotected. Access tokens or API keys may be embedded in code and never rotated. Teams may assume that internal networks are inherently secure, overlooking the fact that VPN access, misconfigurations, or compromised accounts can bridge that boundary. Cloud settings, including improperly configured gateways or firewall rules, can also unintentionally expose services to the internet.

These risks are amplified in LLM ecosystems because models are typically connected to multiple internal systems. If an attacker compromises one endpoint, they may gain indirect access to databases, automation tools, and cloud resources that already trust the model’s credentials. Unlike traditional APIs with narrow functions, LLM interfaces often support broad, automated workflows. This enables lateral movement at scale.

Threat actors can exploit prompts to extract confidential information the model can access. They may also misuse tool integrations to modify internal resources or trigger privileged operations. Even limited access can be dangerous if attackers manipulate input data in ways that influence the model to perform harmful actions indirectly.

Non-human identities intensify this exposure. Service accounts, machine credentials, and API keys allow models to function continuously without human intervention. For convenience, these identities are often granted broad permissions and rarely audited. If an endpoint tied to such credentials is breached, the attacker inherits trusted system-level access. Problems such as scattered secrets across configuration files, long-lived static credentials, excessive permissions, and a growing number of unmanaged service accounts increase both complexity and risk.

Mitigating these threats requires assuming that some endpoints will eventually be reached. Security strategies should focus on limiting impact. Access should follow strict least-privilege principles for both people and systems. Elevated rights should be granted only temporarily and revoked automatically. Sensitive sessions should be logged and reviewed. Credentials must be rotated regularly, and long-standing static secrets should be eliminated wherever possible.

Because LLM systems operate autonomously and at scale, traditional access models are no longer sufficient. Strong endpoint privilege governance, continuous verification, and reduced standing access are essential to protecting AI-driven infrastructure from escalating compromise.

Anthropic Launches Claude Code Security To Autonomously Detect And Patch Bugs

 

Anthropic has introduced Claude Code Security, a new AI-powered capability in its Claude Code assistant that promises to raise the bar for software security by scanning entire codebases for vulnerabilities and suggesting human-reviewed patches. The feature is currently rolling out in a limited research preview for Enterprise and Team customers, reflecting Anthropic’s cautious approach to deploying advanced cybersecurity tools. By positioning this as a defender-focused technology, the company aims to counter the same AI-driven techniques that attackers are starting to use to automate vulnerability discovery at scale.

Unlike traditional static analysis tools that rely on rule-based pattern matching and known vulnerability signatures, Claude Code Security analyzes code more like a human security researcher. It reasons about how different components interact, traces data flows through the application, and flags subtle issues that conventional scanners often miss. This deeper contextual understanding is designed to surface complex and high-severity bugs that may have remained hidden despite years of manual and automated review. 

Each issue identified by Claude Code Security goes through a multi-stage verification process intended to filter out false positives before results ever reach a security analyst. The system re-examines its own findings, attempts to prove or disprove them, and assigns both severity and confidence ratings so teams can prioritize the most critical fixes. All results are presented in a dedicated dashboard, where developers and security teams can inspect the affected code, review the suggested patches, and decide how to remediate. Anthropic emphasizes a human-in-the-loop model, ensuring that nothing is changed without explicit developer approval.

Claude Code Security builds on more than a year of research into Anthropic’s cybersecurity capabilities, including testing in capture-the-flag competitions and collaborations with partners such as Pacific Northwest National Laboratory. Using its latest Claude Opus 4.6 model, Anthropic reports that it has already uncovered more than 500 long-standing vulnerabilities in production open-source projects, many of which had survived decades of expert scrutiny. Those findings are now going through triage and responsible disclosure with maintainers, reinforcing the tool’s emphasis on real-world impact and careful rollout. 

Anthropic sees this launch as part of a broader shift in the cybersecurity landscape, where AI will routinely scan a significant share of the world’s code for flaws. The company warns that attackers will increasingly use similar models to find exploitable weaknesses faster than ever, but argues that defenders who move quickly can seize the same advantages to harden their systems in advance. By making Claude Code Security available first to enterprises, teams, and open-source maintainers, Anthropic is betting that AI-augmented defenders can keep pace with, and potentially outmaneuver, AI-empowered adversaries.

Cloudflare Launches Moltworker to Run Self-Hosted AI Agent Moltbot on Its Developer Platform

 

Cloudflare has unveiled Moltworker, an open-source framework designed to run Moltbot—a self-hosted personal AI agent—directly on its Developer Platform, eliminating the requirement for dedicated on-premise hardware. Moltbot, formerly known as Clawdbot, functions as a customizable personal assistant that operates within chat applications. It connects with AI models, web browsers, and third-party services while maintaining user control over data and workflows.

Moltworker modifies Moltbot to function within Cloudflare Workers by pairing an entrypoint Worker with isolated Sandbox containers. The Worker serves as the API routing and administrative interface, while Moltbot’s runtime and integrations execute inside secure Sandboxes. To overcome the temporary nature of containers, persistent data—such as conversation history and session information—is stored in Cloudflare R2.

The deployment takes advantage of recent improvements to Node.js compatibility within Cloudflare Workers. According to Cloudflare, enhanced native Node API support reduces reliance on workaround solutions and enables a wider range of npm packages to run without modification. Although Moltbot currently runs primarily inside containers, the company suggests that stronger compatibility could allow more agent logic to shift closer to the edge over time.

Moltworker also incorporates multiple Cloudflare services to mirror and expand upon the local Moltbot setup. AI traffic is routed through Cloudflare AI Gateway, which provides access to multiple model providers along with centralized monitoring and configuration tools. Browser automation is powered by Cloudflare Browser Rendering, enabling Moltbot to operate headless Chromium sessions for tasks such as page navigation, form submissions, and content extraction—without embedding a browser directly within the container. Access control for APIs and the administrative interface is secured through Cloudflare Zero Trust Access.

Early community feedback has been divided. Some users view the hosted model as a way to simplify deployment and encourage broader adoption. Commenting on the announcement, Peter Choi noted that running Moltbot on Cloudflare could significantly broaden adoption, but questioned whether the shift alters the project’s original appeal, which emphasized full local control.

Others emphasized operational convenience. One user wrote:I've been self-hosting on a VPS, which works fine, but managing the box is a chore. This looks like the 'set it and forget it' version. Curious how state persistence works across worker invocations.

Cloudflare has released Moltworker as an open-source project on GitHub and describes it as a proof of concept rather than a fully supported product. The company presents it as a demonstration of how its Developer Platform—integrating Workers, Sandboxes, AI Gateway, Browser Rendering, and storage services—can securely deploy and scale AI agents at the edge.


London Boroughs Struggle to Restore Services After November Cyber Attack




A cyber intrusion identified on November 24, 2025 has disrupted essential local authority services in two central London boroughs, freezing parts of the property market and delaying administrative functions.

The Royal Borough of Kensington and Chelsea and Westminster City Council have both been unable to operate several core systems since the breach was detected. Although Kensington and Chelsea is internationally associated with high-value homes, luxury retail outlets and tree-lined residential streets, routine civic operations in the borough are currently under strain.

A notice published on the Kensington and Chelsea council website states that disruption is expected to continue for several more weeks and that restoring all services may take months.

According to HM Land Registry figures, approximately 2,000 property transactions occur annually within Kensington and Chelsea. Many of those transactions are now impacted because the councils cannot conduct local authority searches. These searches are mandatory checks that examine planning history, land charges, infrastructure proposals and regulatory constraints linked to a property.

Nick Gregori, Head of Research at property data platform LonRes, explained that local authority searches are fundamental to the conveyancing process. Buyers relying on mortgage financing cannot secure loans without completed searches. Even purchasers using cash are advised to obtain them to ensure proper due diligence.

Jo Eccles, founder of buying agency Eccord, said two of her clients purchasing in Westminster have had to obtain indemnity insurance because official searches are not expected to resume until April due to accumulated delays. She noted that private banks are sometimes willing to proceed with indemnity-backed transactions, whereas retail lenders are generally less accommodating.

Robert Green, Head of Sales at John D Wood & Co. in Chelsea Green, stated that indemnity policies do not eliminate the need for careful investigation. Solicitors are attempting to reconstruct due diligence by reviewing historical documentation held by sellers or from previous acquisition files. Buyers without access to private lending or substantial liquidity are finding transactions extremely difficult to complete.

Planning services have also stalled. Architect Emily Ceraudo has two projects paused: one involving listed building consent in South Kensington and another concerning a mansard roof extension in Mayfair. She said clients initially struggled to accept that the entire planning system could remain offline for this duration, prompting her to share official correspondence confirming the cause of delay. Councils have indicated that some applications may be processed offline, but no revised timeframe has been provided.

There are reports of contractors reconsidering site activity and some clients contemplating proceeding with works in anticipation of retrospective approval.

Housing benefit payments were also interrupted. Laurence Turner, who rents a studio flat in Chelsea to an elderly tenant with medical needs, said he only became aware of the issue after two missed payments. He emphasized that he has no contractual relationship with the council and that his tenant had consistently paid rent early for five years. His letting agent, Maskells, contacted the council for clarification. Payments due in mid-December and mid-January were missed, leaving £2,870 outstanding before funds were eventually received.

Turner observed that council service charges were skipped once in mid-December but resumed in mid-January, whereas housing benefit was missed twice. He acknowledged that municipal financial systems are complex and that he may not see the full administrative context.

Neither borough has provided a definitive restoration date. Kensington and Chelsea stated that systems are being reactivated gradually under guidance from NCC Group, the Metropolitan Police and the National Cyber Security Centre. Property searches are expected to return as soon as possible, with a limited search service available before full restoration.

Council Leader Cllr Elizabeth Campbell described the incident as a n intricate criminal cyber attack. She said prior investment in digital, data and technology infrastructure, including updated cyber defence systems, helped reduce overall damage. She confirmed that the planning system is undergoing checks, that new planning applications cannot progress beyond validation, and that local land charge searches remain unavailable. She added that £10 million in housing benefits has been issued since the incident and that recovery work continues with specialist partners to ensure systems are restored safely and with strengthened resilience. 

India Sees Rising Push for Limits on Children’s Social Media Access

 

A growing conversation around restricting social media access for children under 16 is gaining traction across India, with several state leaders reviewing regulatory models adopted overseas — particularly in Australia.

Ministers from at least two southern states have indicated that they are assessing whether prohibiting minors from using social media could effectively shield children from excessive online exposure.

Adding weight to the debate, the latest Economic Survey — an annual report prepared by a team led by India’s chief economic adviser suggested that the central government explore age-based controls on children’s social media usage. While the survey does not mandate policy action, its recommendations often influence national discussions.

Australia’s Precedent Sparks Global Debate

Australia recently became the first nation to prohibit most social media platforms for users under 16. The law requires companies to verify users’ ages and deactivate accounts belonging to underage individuals.

The decision drew criticism from tech platforms. As Australia’s internet regulator told the BBC last month, companies responded to the framework "kicking and screaming - very very reluctantly".

Meanwhile, lawmakers in France have approved a bill in the lower house seeking to block social media access for children under 15; the proposal now awaits Senate approval. The United Kingdom is also evaluating similar measures.

In India, LSK Devarayalu of the Telugu Desam Party — which governs Andhra Pradesh and supports Prime Minister Narendra Modi’s federal coalition — introduced a private member’s bill proposing a ban on social media use for children under 16. Although such bills rarely become law, they can influence legislative debate.

Separately, the Andhra Pradesh government has formed a ministerial group to examine international regulatory models. It has also invited major technology firms, including Meta, X, Google and ShareChat, for consultations. The companies have yet to respond publicly.

State IT Minister Nara Lokesh recently wrote on X that children were "slipping into relentless usage" of social media, affecting their attention spans and academic performance.

"We will ensure social media becomes a safer space and reduce its damaging impact - especially for women and children," he added.

In Goa, Tourism and IT Minister Rohan Khaunte confirmed that authorities are studying whether such restrictions could be introduced, promising further details soon.

Similarly, Priyank Kharge, IT Minister of Karnataka — home to Bengaluru, often dubbed India’s Silicon Valley — informed the state assembly that discussions were underway on responsible artificial intelligence and social media use. He referenced a “digital detox” initiative launched in partnership with Meta, involving approximately 300,000 students and 100,000 teachers. However, he did not clarify whether legislative action was being considered.

Enforcement and Legal Hurdles

Experts caution that implementing such bans in India would be legally and technically complex.

Digital rights activist Nikhil Pahwa pointed out that enforcing state-level prohibitions could create jurisdictional conflicts. "While companies can infer users' locations through IP addresses, such systems are often inaccurate. Where state boundaries are very close, you can end up creating conflicts if one state bans social media use and another does not."

He also underscored the broader issue of age verification. "Age verification is not simple. To adhere to such bans, companies would effectively have to verify every individual using every service on the internet," Pahwa told the BBC.

Even in Australia, some minors reportedly bypass restrictions by entering false birth dates to create accounts.

According to Prateek Waghre, head of programmes at the Tech Global Institute, successful enforcement would hinge on platform cooperation.

"In theory, location can be inferred through IP addresses by internet service providers or technology companies, but whether the companies operating such apps would comply, or challenge such directions in court, is not yet clear," he says.

Broader Social Concerns

While lawmakers acknowledge the risks of excessive social media exposure, some analysts argue that a blanket ban may be too narrow a solution.

A recent survey of 1,277 Indian teenagers by a non-profit organisation found that many accounts are created with assistance from family members or friends and are often not tied to personal email addresses. This complicates assumptions of individual ownership central to age-verification systems.

Parents remain divided. Delhi resident Jitender Yadav, father of two young daughters, believes deeper issues are at play.

"Parents themselves fail to give enough time to children and hand them phones to keep them engaged - the problem starts there," he says.

"I am not sure if a social media ban will help. Because unless parents give enough time to their children or learn to keep them creatively engaged, they will always find ways to bypass such bans," he says.

As the discussion unfolds, India faces a complex balancing act — safeguarding children online while navigating legal, technological and social realities.

SMS and OTP Bombing Tools Evolve into Scalable, Global Abuse Infrastructure

 

The modern authentication ecosystem operates on a fragile premise: that one-time password requests are legitimate. That assumption is increasingly being challenged. What started in the early 2020s as loosely circulated scripts designed to annoy phone numbers has transformed into a coordinated ecosystem of SMS and OTP bombing tools built for scale, automation, and persistence.

New findings from Cyble Research and Intelligence Labs (CRIL) analyzed nearly 20 actively maintained repositories and found rapid technical progression continuing through late 2025 and into 2026. These tools have moved beyond basic terminal scripts. They now include cross-platform desktop applications, Telegram-integrated automation frameworks, and high-performance systems capable of launching large-scale SMS, OTP, and voice-bombing campaigns across multiple geographies.

Researchers emphasize that the study reflects patterns within a defined research sample and should be viewed as indicative trends rather than a full mapping of the global ecosystem. Even within that limited dataset, the scale and sophistication are significant

SMS and OTP bombing campaigns exploit legitimate authentication endpoints. Attackers repeatedly trigger password resets, registration verifications, or login challenges, overwhelming a victim’s phone with genuine SMS messages or automated voice calls. The result ranges from harassment and disruption to more serious risks such as MFA fatigue.

Across the 20 repositories examined, researchers identified approximately 843 vulnerable API endpoints. These endpoints belonged to organizations across telecommunications, financial services, e-commerce, ride-hailing services, and government platforms. The recurring weaknesses were predictable: inadequate rate limiting, weak or poorly enforced CAPTCHA mechanisms, or both.

Regional targeting was uneven. Roughly 61.68% of observed endpoints—about 520—were linked to infrastructure in Iran. India accounted for 16.96%, approximately 143 endpoints. Additional activity was concentrated in Turkey, Ukraine, and parts of Eastern Europe and South Asia.

The attack lifecycle typically begins with endpoint discovery. Threat actors manually test authentication workflows, probe common API paths such as /api/send-otp or /auth/send-code, reverse-engineer mobile applications to uncover hardcoded API references, or leverage community-maintained endpoint lists shared in public repositories and forums. Once identified, these endpoints are integrated into multi-threaded attack frameworks capable of issuing simultaneous requests at scale.

The technical sophistication of SMS and OTP bombing tools has advanced considerably. Maintainers now offer versions across seven programming languages and frameworks, lowering entry barriers for individuals with limited coding expertise.

Modern toolkits commonly include:
  • Multi-threading to enable parallel API exploitation
  • Proxy rotation to bypass IP-based defenses
  • Request randomization to mimic human behavior
  • Automated retry mechanisms and failure handling
  • Real-time activity dashboards
More concerning is the widespread use of SSL bypass techniques. Approximately 75% of the repositories analyzed disable SSL certificate validation. Instead of relying on properly verified secure connections, these tools deliberately ignore certificate errors, enabling traffic interception or manipulation without interruption. SSL bypass has emerged as one of the most frequently observed evasion strategies.

In addition, 58.3% of repositories randomize User-Agent headers to evade signature-based detection systems. Around 33% exploit static or hardcoded reCAPTCHA tokens, effectively bypassing poorly implemented bot protections.

The ecosystem has also expanded beyond SMS flooding. Voice-bombing capabilities—automated call floods triggered through telephony APIs—are now integrated into several frameworks, broadening the harassment surface.

Commercialization and Data Harvesting Risks

Alongside open-source development, a commercial layer has surfaced. Browser-based SMS and OTP bombing platforms now offer simplified, point-and-click interfaces. Often marketed misleadingly as “prank tools” or “SMS testing services,” these platforms eliminate technical setup requirements.

Unlike repository-based tools that require local execution and configuration, web-based services abstract proxy management, API integration, and automation processes. This significantly increases accessibility.

However, these services frequently operate on a dual-threat model. Phone numbers entered into such platforms are often harvested. The collected data may later be reused in spam campaigns, sold as lead lists, or integrated into broader fraud operations. In effect, users risk exposing both their targets and themselves to ongoing exploitation.

Financial, Operational, and Reputational Impact

For individuals, SMS and OTP bombing can severely disrupt device usability. Effects include degraded performance, overwhelmed message inboxes, exhausted SMS storage, battery drain, and increased risk of MFA fatigue—potentially leading to accidental approval of malicious login attempts. Voice-bombing campaigns further intensify the disruption.

For organizations, the consequences extend well beyond inconvenience.

Financially, each OTP message typically costs between $0.05 and $0.20. An attack generating 10,000 messages can result in expenses ranging from $500 to $2,000. Sustained abuse of exposed endpoints can drive monthly SMS costs into five-figure sums.

Operationally, legitimate users may be unable to receive verification codes, customer support volumes can surge, and authentication delays can impact service reliability. In regulated industries, failure to secure authentication workflows may introduce compliance risks.

Reputational damage compounds these issues. Users quickly associate spam-like behavior with weak security controls, eroding trust and confidence in affected organizations.

As SMS and OTP bombing tools continue to evolve in sophistication and accessibility, the strain on authentication infrastructure underscores the urgent need for stronger rate limiting, adaptive bot detection, and hardened API protections across industries

Tesla Slashes Car Line-Up to Double Down on Robots and AI

 

Tesla is cutting several car models and scaling back its electric vehicle ambitions as it shifts focus towards robotics and artificial intelligence, marking a major strategic turning point for the company. The move comes after Tesla reported its first annual revenue decline since becoming a major EV player, alongside a steep fall in profits that undercut its long-standing image as a hyper-growth automaker.Executives are now presenting AI-driven products, including autonomous driving systems and humanoid robots, as the company’s next big profit engines, even as demand for its vehicles shows signs of cooling in key markets.

According to the company, several underperforming or lower-margin models will be discontinued or phased out, allowing Tesla to concentrate resources on a smaller range of vehicles and on the software and AI platforms that power them. This rationalisation follows intense price competition in the global EV market, especially from Chinese manufacturers, which has squeezed margins and forced Tesla into repeated price cuts over the past year. While the company argues that a leaner line-up will improve efficiency and profitability, the decision raises questions about whether Tesla is stepping back from its once-stated goal of driving a mass-market EV revolution.

Elon Musk has increasingly projected Tesla as an AI and robotics firm rather than a traditional carmaker, highlighting projects such as its Optimus humanoid robot and advanced driver-assistance systems. In recent briefings, Musk and other executives have suggested that robotaxis and factory robots could ultimately generate more value than car sales, if Tesla can achieve reliable full self-driving and scale its robotics platforms. Investors, however, remain divided on whether these long-term bets justify the current volatility in Tesla’s core automotive business.

Analysts say the shift underscores broader turbulence in the EV sector, where slowing demand growth, higher borrowing costs and intensifying competition have forced companies to reassess expansion plans. Tesla’s retrenchment on vehicle models is being closely watched by rivals and regulators, as it may signal a maturing market in which software, AI capabilities and integrated ecosystems matter more than the sheer number of models on offer. At the same time, a pivot towards AI raises fresh scrutiny over safety, data practices and the real-world performance of autonomous systems.

For consumers, the immediate impact is likely to be fewer choices in Tesla’s showroom but potentially faster updates and improvements to the remaining models and their software features. Some owners may welcome the renewed focus on autonomy and smart features, while others could be frustrated if favoured variants are discontinued.As Tesla repositions itself, the company faces a delicate balancing act: reassuring car buyers and shareholders today while betting heavily that its AI and robotics vision will define its future tomorrow.

Paul McCartney’s Phone-Free Concert Sparks Growing Push to Lock Smartphones Away

 


When Sir Paul McCartney took the stage at the Santa Barbara Bowl, he promised fans a close, personal performance. He went a step further by introducing a strict no-phones policy, effectively creating a temporary “lockdown” on selfies and video recording.

All 4,500 attendees were required to place their mobile phones inside magnetically sealed pouches for the entire show, resulting in a completely phone-free concert experience.

"Nobody's got a phone," McCartney announced during his 25-song performance. "Really, it's better!" he added.

The process behind enforcing such a large-scale phone ban is relatively straightforward. As fans enter the venue, their phones are sealed inside special pouches that remain with them throughout the event. Once the show ends, the magnetic lock is released and devices are returned to normal use.

A growing number of artists have adopted similar policies. Performers including Dave Chappelle, Alicia Keys, Guns N' Roses, Childish Gambino and Jack White say phone-free environments help them deliver better performances and even take creative risks.

In a June interview with Rolling Stone, Sabrina Carpenter also spoke about the possibility of banning phones at future concerts. Many fans appear open to the idea.

Shannon Valdes, who attended a Lane8 DJ set, shared her experience online: "It was refreshing to be part of a crowd where everyone was fully present - dancing, connecting, and enjoying the best moments - rather than recording them."

The inspiration behind the pouch technology dates back to 2012, when Graham Dugoni witnessed a moment at a music festival that left a lasting impression.

"I saw a man drunk and dancing and a stranger filmed him and immediately posted it online," Dugoni explains. "It kind of shocked me.

"I wondered what the implications might be for him, but I also started questioning what our expectations of privacy should be in the modern world."

Within two years, the former professional footballer launched Yondr, a US-based start-up focused on creating phone-free spaces. While the lockable pouch industry is still developing, more companies are entering the market. These pouches are now commonly used in theatres, art galleries, and increasingly in schools.

Prices typically range from £7 to £30 per pouch, depending on order size and supplier. Yondr says it has partnered with around 2.2 million schools in the US, while roughly 250,000 students across 500 schools in England now use its pouches. One academy trust in Yorkshire reportedly spent £75,000 implementing the system.

Paul Nugent, founder of Hush Pouch, spent two decades installing school lockers before entering this space. He says school leaders must weigh several factors before adopting the technology.

"Yes it can seem an expensive way of keeping phones out of schools, and some people question why they can't just insist phones remain in a student's bag," he explains.

"But smartphones create anxiety, fixation, and FOMO - a fear of missing out. The only way to genuinely allow children to concentrate in lessons, and to enjoy break time, is to lock them away."

According to Dugoni, schools that have introduced phone-free policies have reported measurable benefits.

"There have been notable improvements in academic performance, and headteachers also report reductions in bullying," he explains.

Vale of York Academy introduced pouches in November. Headteacher Gillian Mills told the BBC: "It's given us an extra level of confidence that students aren't having their learning interrupted.

"We're not seeing phone confiscations now, which took up time, or the arguments about handing phones over, but also teachers are saying that they are able to teach."

The political debate around smartphones in schools is also intensifying. Conservative leader Kemi Badenoch has said her party would push for a complete ban on smartphones in schools if elected. The Labour government has stopped short of a nationwide ban, instead allowing headteachers to decide, while opening a consultation on restricting social media access for under-16s.

As part of these measures, Ofsted will be granted powers to review phone-use policies, with ministers expecting schools to become “phone-free by default”.

Nugent notes that many parents prefer their children to carry phones for safety reasons during travel.

"The first week or so after we install the system is a nightmare," he adds. "Kids refuse, or try and break the pouches open. But once they realise no-one else has a phone, most of them embrace it as a kind of freedom."

The rapid expansion of social media platforms and AI-driven content places these phone-free initiatives in direct opposition to tech companies whose algorithms encourage constant smartphone use. Still, Nugent believes public sentiment is shifting.

"We're getting so many enquiries now. People want to ban phones at weddings, in theatres, and even on film sets," he says.

"Effectively carrying a computer around in your hand has many benefits, but smartphones also open us up to a lot of misdirection and misinformation.

"Enforcing a break, especially for young people, has so many positives, not least for their mental health."

Dugoni agrees that society may be reaching a turning point.

"We're getting close to threatening the root of what makes us human, in terms of social interaction, critical thinking faculties, and developing the skills to operate in the modern world," he explains.

"If we continue to outsource those, with this crutch in our pocket at all times, there is a danger we end up undermining what it means to be a productive person.

"And that is a moment where it's worth pushing back and trying to understand where we go from here."

As 4,500 McCartney fans sang along to Hey Jude under a late-September sky, many may have felt the former Beatle’s message resonate just as strongly as the music.

Ukraine Increases Control Over Starlink Terminals


New Starlink verification system 

Ukraine has launched a new authentication system for Starlink satellite internet terminals used by the public and the military after verifying that Russia state sponsored hackers have started using the technology to attack drones. 

The government has also introduced a compulsory “whitelist” for Starlink terminals, where only authenticated and registered devices will work in Ukraine. All other terminals used will be removed, as per the statement from Mykhailo Fedorov, country's recently appointed defense chief. 

Why the new move?

Kyiv claims that Russian unmanned aerial vehicles are now being commanded in real time using Starlink links, making them more difficult to detect, jam, or shoot down. This action is intended to counteract these threats. "It is challenging to intercept Russian drones that are equipped with Starlink," Fedorov stated earlier this week. "They can be controlled by operators over long distances in real time, will not be affected by electronic warfare, and fly at low altitudes." The Ministry of Defense is implementing the whitelist in collaboration with SpaceX, the company that runs the constellation of low-Earth orbit satellites for Starlink.

The step is presently the only technological way to stop Russia from abusing the system, Fedorov revealed Wednesday, adding that citizens have already started registering their terminals. "The government has taken this forced action to save Ukrainian lives and safeguard our energy infrastructure," he stated. 

How will it impact other sectors?

Businesses will be able to validate devices online using Ukraine's e-government services, while citizens will be able to register their terminals at local government offices under the new system. According to Ukraine's Ministry of Defense, military units will be exempt from disclosing account information and will utilize a different secure registration method.

Using Starlink connectivity, Ukraine discovered a Russian drone operating over Ukrainian territory at the end of January. After then, Kyiv got in touch with SpaceX to resolve the problem, albeit the specifics of the emergency procedures were not made public. Army, a Ukrainian military outletSetting a maximum speed at which Starlink terminals can operate was one step, according to Inform, which cited an initial cap of about 75 kilometers per hour. According to the study, Russian strike drones usually fly faster than that, making it impossible for operators to manage them in real time.


YouTube's New GenAI Feature in Tools Coming Soon


Youtube is planning something new for its platform and content creators in 2026. The company plans to integrate AI into its existing and new tools. The CEO said that content creators will be able to use GenAI for shorts. While we don't know much about the feature yet, it looks like OpenAI’s Sora app where users make videos of themselves via prompt. 

What will be new in 2026? 

“This year you'll be able to create a Short using your own likeness, produce games with a simple text prompt, and experiment with music “ said CEO Neal Mohan. All these apps will be AI-powered which many creators may not like. Many users prefer non-AI content. CEO Neil Mohan has addressed these concerns and said that “throughout this evolution, AI will remain a tool for expression, not a replacement.”

But the CEO didn't provide other details about these new AI capabilities. It is not clear how this will help the creators and the music experimentation work. 

That's not all, though.

Additionally, YouTube will introduce new formats for shorts. According to Mohan, Shorts would let users to share images in the same way as Instagram Reels does. Direct sharing of these will occur on the subscribers' feed. 

In 2026, YouTube will likewise concentrate on the biggest displays it can be accessed on, which are televisions. According to Mohan, the business will soon introduce "more than 10 specialized YouTube TV plans spanning sports, entertainment, and news, all designed to give subscribers more control," along with "fully customizable multiview.”

Why new feature?

Mohan noted that the creator economy is another area of concern. According to YouTube's CEO, video producers will discover new revenue streams this year. The suggestions made include fan funding elements like jewelry and gifts, which will be included in addition to the current Super Chat, as well as shopping and brand bargains made possible by YouTube. 

YouTube's new venture

The business also hopes to grow YouTube Shopping, an affiliate program that lets content producers sell goods directly in their videos, shorts, and live streams. The business stated that it will implement in-app checkout in 2026, enabling users to make purchases without ever leaving the site.