
Jailbroken Mistral And Grok Tools Are Used by Attackers to Build Powerful Malware

 

The latest findings from Cato Networks suggest that a number of jailbroken and uncensored AI tool variants marketed on hacker forums were probably built on well-known commercial large language models such as Mistral AI's Mixtral and xAI's Grok.

While some commercial AI companies have attempted to build safety and security safeguards into their models to prevent them from explicitly coding malware, providing detailed instructions for building bombs, or engaging in other malicious behaviours, a parallel underground market has developed that sells uncensored versions of the technology.

Named after one of the first AI tools promoted on underground hacker forums in 2023, these "WormGPTs" are typically assembled from open-source models and other toolkits. They can generate code and find and analyse vulnerabilities, and they are then sold and marketed online. 

However, Vitaly Simonovich, a researcher at Cato Networks, reveals that two variations promoted on BreachForums in the last year had straightforward origins. “Cato CTRL has discovered previously unreported WormGPT variants that are powered by xAI’s Grok and Mistral AI’s Mixtral,” he wrote. 

One version was accessible via Telegram and was promoted on BreachForums in February. It referred to itself as an “Uncensored Assistant” but otherwise described its function in a positive and uncontroversial manner. After gaining access to both models and beginning his investigation, Simonovich found that they were, as promised, largely unfiltered. 

In addition to other offensive capabilities, the models could create phishing emails and generate PowerShell-based credential-stealing malware on demand. However, Simonovich found prompt-based guardrails designed to hide one thing: the initial system prompts used to build those tools. He was able to evade these constraints with an LLM jailbreaking technique that exposed the first 200 tokens processed by the system, and the response identified xAI's Grok as the underlying model driving the tool.

“It appears to be a wrapper on top of Grok and uses the system prompt to define its character and instruct it to bypass Grok’s guardrails to produce malicious content,” Simonovich added.

Another WormGPT variant, promoted in October 2024 with the subject line "WormGPT / 'Hacking' & UNCENSORED AI," was described as an artificial intelligence-based language model focused on "cyber security and hacking issues." The seller stated that the tool gives customers "access to information about how cyber attacks are carried out, how to detect vulnerabilities, or how to take defensive measures," but emphasised that neither they nor the product accept legal responsibility for the user's actions.

AI Skills Shortage Deepens as Enterprise Demand Grows Faster Than Talent Supply

 

The shortage of skilled professionals in artificial intelligence is becoming a major concern for enterprises, as organizations race to adopt the technology without a matching increase in qualified talent. The latest Harvey Nash Digital Leadership report, released by Nash Squared in May, highlights a sharp rise in demand for AI skills across industries—faster than any previous tech trend tracked in the last 16 years. 

Based on responses from over 2,000 tech executives, the report found that more than half of IT leaders now cite a lack of AI expertise as a key barrier to progress. This marks a steep climb from just 28% a year ago. In fact, AI has jumped from the sixth most difficult skill to hire for to the number one spot in just over a year. Interest in AI adoption continues to soar, with 90% of surveyed organizations either investing in or piloting AI solutions—up significantly from 59% in 2023. Despite this enthusiasm, a majority of companies have not yet seen measurable returns from their AI projects. Many remain stuck in early testing phases, unable to deploy solutions at scale. 

Numerous challenges continue to slow enterprise AI deployment. Besides the scarcity of skilled professionals, companies face obstacles such as inadequate data infrastructure and tight budgets. Without the necessary expertise, organizations struggle to transition from proof-of-concept to full integration. Bev White, CEO of Nash Squared, emphasized that enterprises are navigating uncharted territory. “There’s no manual for scaling AI,” she explained. “Organizations must combine various strategies—formal education, upskilling of tech and non-tech teams, and hands-on experimentation—to build their AI capabilities.” She also stressed the need for operational models that naturally embed AI into daily workflows. 

The report’s findings show that the surge in AI skill demand has outpaced any other technology shift in recent memory. Sectors like manufacturing, education, pharmaceuticals, logistics, and professional services are all feeling the pressure to hire faster than the talent pool allows. Supporting this trend, job market data shows explosive growth in demand for AI roles. 

According to Indeed, postings for generative AI positions nearly tripled year-over-year as of January 2025. Unless companies prioritize upskilling and talent development, the widening AI skills gap could undermine the long-term success of enterprise AI strategies. For now, the challenge of turning AI interest into practical results remains a steep climb.

Klarna Scales Back AI-Led Customer Service Strategy, Resumes Human Support Hiring

 

Klarna Group Plc, the Sweden-based fintech company, is reassessing its heavy reliance on artificial intelligence (AI) in customer service after admitting the approach led to a decline in service quality. CEO and co-founder Sebastian Siemiatkowski acknowledged that cost-cutting took precedence over customer experience during a company-wide AI push that replaced hundreds of human agents. 

Speaking at Klarna’s Stockholm headquarters, Siemiatkowski conceded, “As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality.” The company had frozen hiring for over a year to scale its AI capabilities but now plans to recalibrate its customer service model. 

In a strategic shift, Klarna is restarting recruitment for customer support roles — a rare move that reflects the company’s need to restore the quality of human interaction. A new pilot program is underway that allows remote workers — including students and individuals in rural areas — to provide customer service on-demand in an “Uber-like setup.” Currently, two agents are part of the trial. “We also know there are tons of Klarna users that are very passionate about our company and would enjoy working for us,” Siemiatkowski said. 

He stressed the importance of giving customers the option to speak to a human, citing both brand and operational needs. Despite dialing back on AI-led customer support, Klarna is not walking away from AI altogether. The company is continuing to rebuild its tech stack with AI at the core, aiming to improve operational efficiency. It is also developing a digital financial assistant designed to help users secure better interest rates and insurance options. 

Klarna maintains a close relationship with OpenAI, a collaboration that began in 2023. “We wanted to be [OpenAI’s] favorite guinea pig,” Siemiatkowski noted, reinforcing the company’s long-term commitment to leveraging AI. Klarna’s course correction follows a turbulent financial period. After peaking at a $45.6 billion valuation in 2021, the company saw its value drop to $6.7 billion in 2022. It has since rebounded and aims to raise $1 billion via an IPO, targeting a valuation exceeding $15 billion — though IPO plans have been paused due to market volatility. 

The company’s 2024 announcement that AI was handling the workload of 700 human agents disrupted the call center industry, leading to a sharp drop in shares of Teleperformance SE, a major outsourcing firm. While Klarna is resuming hiring, its overall workforce is expected to shrink. “In a year’s time, we’ll probably be down to about 2,500 people from 3,000,” Siemiatkowski said, noting that attrition and further AI improvements will likely drive continued headcount reductions.

Brave Browser’s New ‘Cookiecrumbler’ Tool Aims to Eliminate Annoying Cookie Consent Pop-Ups

 

While the General Data Protection Regulation (GDPR) was introduced with noble intentions—to protect user privacy and control over personal data—its practical side effects have caused widespread frustration. For many internet users, GDPR has become synonymous with endless cookie consent pop-ups and hours of compliance training. Now, Brave Browser is stepping up with a new solution: Cookiecrumbler, a tool designed to eliminate the disruptive cookie notices without compromising web functionality. 

Cookiecrumbler is not Brave’s first attempt at combating these irritating banners. The browser has long offered pop-up blocking capabilities. However, the challenge hasn’t been the blocking itself—it’s doing so while preserving website functionality. Many websites break or behave unexpectedly when these notices are blocked improperly. Brave’s new approach promises to fix that by taking cookie blocking to a new level of sophistication.  

According to a recent announcement, Cookiecrumbler combines large language models (LLMs) with human oversight to automate and refine the detection of cookie banners across the web. This hybrid model allows the tool to scale effectively while maintaining precision. By running on Brave’s backend servers, Cookiecrumbler crawls websites, identifies cookie notices, and generates custom rules tailored to each site’s layout and language. One standout feature is its multilingual capability. Cookie notices often vary not just in structure but in language and legal formatting based on the user’s location. 

Cookiecrumbler accounts for this by using geo-targeted vantage points, enabling it to view websites as a local user would, making detection far more effective. The developers highlight several reasons for using LLMs in this context: cookie banners typically follow predictable language patterns, the work is repetitive, and it’s relatively low-risk. The cost of each crawl is minimal, allowing the team to test different models before settling on smaller, efficient ones that provide excellent results with fine-tuning. Importantly, human reviewers remain part of the process. While AI handles the bulk detection, humans ensure that the blocking rules don’t accidentally interfere with important site functions. 

These reviewers refine and validate Cookiecrumbler’s suggestions before they’re deployed. Even better, Brave is releasing Cookiecrumbler as an open-source tool, inviting integration by other browsers and developers. This opens the door for browsers such as Vivaldi or Firefox to adopt similar capabilities. 
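The announcement describes the workflow rather than the code, but the general approach it outlines (an LLM flags candidate cookie notices, then a human approves a blocking rule) can be sketched roughly as follows. This is a minimal illustration, not Brave's implementation; the call_llm helper is a hypothetical stand-in for a real model request, and the emitted rule uses a generic adblock-style cosmetic filter format.

```python
# Illustrative sketch only: this is not Brave's Cookiecrumbler code.
# The LLM call is stubbed; a real deployment would query a hosted model.
import json

PROMPT_TEMPLATE = (
    "You are labelling HTML snippets crawled from {domain}.\n"
    'Reply with JSON of the form {{"is_cookie_notice": true}} or {{"is_cookie_notice": false}}.\n'
    "Snippet:\n{snippet}\n"
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a request to a small fine-tuned model."""
    # Trivial stub so the example runs end to end; a real model would reason
    # over the snippet's language and structure, not a keyword match.
    verdict = "we use cookies" in prompt.lower()
    return json.dumps({"is_cookie_notice": verdict})

def propose_rule(domain: str, element: dict) -> str | None:
    """Ask the LLM whether an element looks like a cookie notice and, if so,
    emit an adblock-style cosmetic rule for a human reviewer to approve."""
    prompt = PROMPT_TEMPLATE.format(domain=domain, snippet=element["html"])
    if json.loads(call_llm(prompt)).get("is_cookie_notice"):
        return f'{domain}###{element["id"]}'  # hide the element by its id
    return None

if __name__ == "__main__":
    candidate = {
        "id": "cookie-consent",
        "html": '<div id="cookie-consent">We use cookies to improve your experience.</div>',
    }
    # Prints example.com###cookie-consent, which would be queued for human review.
    print(propose_rule("example.com", candidate))
```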

Looking ahead, Brave plans to integrate Cookiecrumbler directly into its browser, but only after completing thorough privacy reviews to ensure it aligns with the browser’s core principle of user-centric privacy. Cookiecrumbler marks a significant step forward in balancing user experience and privacy compliance—offering a smarter, less intrusive web.

Silicon Valley Crosswalk Buttons Hacked With AI Voices Mimicking Tech Billionaires

 

A strange tech prank unfolded across Silicon Valley this past weekend after crosswalk buttons in several cities began playing AI-generated voice messages impersonating Elon Musk and Mark Zuckerberg.  

Pedestrians reported hearing bizarre and oddly personal phrases coming from audio-enabled crosswalk systems in Menlo Park, Palo Alto, and Redwood City. The altered voices were crafted to sound like the two tech moguls, with messages that ranged from humorous to unsettling. One button, using a voice resembling Zuckerberg, declared: “We’re putting AI into every corner of your life, and you can’t stop it.” Another, mimicking Musk, joked about loneliness and buying a Cybertruck to fill the void.  

The origins of the incident remain unknown, but online speculation points to possible hacktivism—potentially aimed at critiquing Silicon Valley’s AI dominance or simply poking fun at tech culture. Videos of the voice spoof spread quickly on TikTok and X, with users commenting on the surreal experience and sarcastically suggesting the crosswalks had been “venture funded.” This situation prompts serious concern. 

Local officials confirmed they’re investigating the breach and working to restore normal functionality. According to early reports, the tampering may have taken place on Friday. These crosswalk buttons aren’t new—they’re part of accessibility technology designed to help visually impaired pedestrians cross streets safely by playing audio cues. But this incident highlights how vulnerable public infrastructure can be to digital interference. Security researchers have warned in the past that these systems, often managed with default settings and unsecured firmware, can be compromised if not properly protected. 

One expert, physical penetration specialist Deviant Ollam, has previously demonstrated how such buttons can be manipulated using unchanged passwords or open ports. Polara, a leading manufacturer of these audio-enabled buttons, did not respond to requests for comment. The silence leaves open questions about how widespread the vulnerability might be and what cybersecurity measures, if any, are in place. This AI voice hack not only exposed weaknesses in public technology but also raised broader questions about the blending of artificial intelligence, infrastructure, and data privacy. 

What began as a strange and comedic moment at the crosswalk is now fueling a much larger conversation about the cybersecurity risks of increasingly connected cities. With AI becoming more embedded in daily life, events like this might be just the beginning of new kinds of public tech disruptions.

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 
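The paper describes what Orion does rather than how to call it, so its API is not reproduced here. As a rough illustration of the underlying idea of running a model layer directly on encrypted data, the sketch below uses the open-source TenSEAL library and the CKKS scheme; it is not part of Orion, and the parameters and toy linear layer are assumptions chosen only for demonstration.

```python
# Not Orion itself: a rough illustration of homomorphic inference using the
# open-source TenSEAL library (CKKS scheme). Assumes `pip install tenseal`;
# the parameters and the tiny linear layer are illustrative only.
import tenseal as ts

# Client side: create an encryption context and encrypt the input features.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for the rotations used by matrix ops

features = [0.5, -1.2, 3.3, 0.0]                  # plaintext user data
enc_features = ts.ckks_vector(context, features)  # encrypted before leaving the client

# Server side: evaluate one linear layer (plaintext weights) on the ciphertext.
weight = [            # shape (4 inputs, 2 outputs)
    [0.25, 1.0],
    [-0.5, 0.3],
    [0.1, -0.2],
    [0.7, 0.05],
]
bias = [0.1, -0.3]
enc_output = enc_features.mm(weight) + bias       # data stays encrypted throughout

# Client side: only the holder of the secret key can read the result.
print(enc_output.decrypt())
```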

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.

AI and Privacy – Issues and Challenges

 

Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and surveillance. From facial recognition software to predictive policing, consumers are left wondering where to draw the line between safety and overreach as AI-driven systems become ever more integrated into daily life.

The same artificial intelligence (AI) tools that help spot online threats, optimise security procedures, and stop fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. Without clear regulations and transparency, AI runs the risk of undermining rather than defending basic rights. 

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven innovations going awry, and they raise serious questions. The facial recognition company Clearview AI amassed one of the largest facial recognition databases in the world by scraping billions of photos from social media without consent. Clearview's technology was employed by governments and law enforcement organisations across the globe, leading to lawsuits and regulatory action over mass surveillance. 

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly selected for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier assurances of impartiality, the findings have fuelled calls for greater transparency and oversight of government AI use. 

Regulations and consumer protection

Governments worldwide are moving to regulate the ethical use of AI, and several significant regulations have a direct impact on consumers. The European Union's AI Act, whose obligations begin to apply in phases from 2025, divides AI applications into risk categories. 

Strict requirements will apply to high-risk technologies, such as biometric surveillance and facial recognition, to guarantee transparency and ethical deployment. The EU's commitment to responsible AI governance is further reinforced by the possibility of severe sanctions for non-compliant companies. 

In the United States, California's Consumer Privacy Act (CCPA) gives state residents more control over their personal data. Consumers have the right to know what information firms gather about them, to request its erasure, and to opt out of data sales. This law adds an important layer of privacy protection in an era where AI-powered data processing is becoming more common. 

The White House has also introduced the Blueprint for an AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a larger push for ethical AI development in policymaking.

Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally designed AI tool, which is driven by DeepSeek's data learning model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, the accomplishment marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality into the city's AI ecosystem and demonstrating the strong collaborative innovation capabilities between Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology, and Industry, highlighted during the launch ceremony that artificial intelligence (AI) is at the vanguard of a new industrial and technological revolution, and that Hong Kong is actively participating in this wave. 

Sun also emphasised the HKSAR government's broader efforts to encourage AI research, including the construction of an AI supercomputing centre, a 3-billion Hong Kong dollar (386 million US dollar) AI funding scheme, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available not just to enterprises and individuals in Hong Kong but also to overseas Chinese communities. 

DeepSeek, founded by Liang Wenfeng, previously stunned the world with its low-cost AI model, which was created with substantially fewer computing resources than those used by larger US tech companies such as OpenAI and Meta. The HKGAI V1 system is the first in the world to use DeepSeek's full-parameter fine-tuning research methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to build the Hong Kong AI Research and Development Institute. The government intends to launch the institute by the 2026-27 fiscal year, with funding set aside for the first five years to cover operational costs, including staffing. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.