
Microsoft 365 Accounts Targeted in Large Iran-Linked Cyber Campaign


A cyber operation believed to be linked to Iranian threat actors has been identified targeting Microsoft 365 environments, with a primary focus on organizations in Israel and the United Arab Emirates. The activity comes amid ongoing tensions in the Middle East and is still considered active.

According to research from Check Point, the campaign was carried out in three separate waves on March 3, March 13, and March 23, 2026. More than 300 organizations in Israel and over 25 in the U.A.E. were affected. Investigators also observed limited targeting in Europe, the United States, the United Kingdom, and Saudi Arabia.

The attackers focused on cloud-based systems used across a wide range of sectors, including government bodies, municipalities, transportation services, energy infrastructure, technology firms, and private companies. This broad targeting indicates an effort to access both public-sector systems and critical commercial operations.

The primary method used in the campaign is known as password spraying. In this technique, attackers attempt a small number of commonly used passwords across many accounts instead of repeatedly targeting a single account. This approach increases the chances of finding weak credentials while avoiding detection systems such as account lockouts or rate-limiting controls.
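The spraying pattern described above also gives defenders something concrete to hunt for: many distinct accounts, very few attempts each, from a single source. A minimal detection sketch (the log format, field names, and thresholds are illustrative assumptions, not a real SIEM API):

```python
from collections import defaultdict

def detect_password_spray(failed_logins, min_accounts=20, max_attempts_per_account=3):
    """Flag source IPs that fail against many accounts but only a few
    times each -- the signature of spraying rather than brute force."""
    by_ip = defaultdict(lambda: defaultdict(int))
    for event in failed_logins:  # each event: {"ip": ..., "account": ...}
        by_ip[event["ip"]][event["account"]] += 1

    suspicious = []
    for ip, accounts in by_ip.items():
        if (len(accounts) >= min_accounts
                and max(accounts.values()) <= max_attempts_per_account):
            suspicious.append(ip)
    return suspicious

# One IP failing once against 25 different accounts is flagged as spraying,
# while 50 failures against a single account (classic brute force) is not.
spray = [{"ip": "10.0.0.1", "account": f"user{i}"} for i in range(25)]
brute = [{"ip": "10.0.0.2", "account": "admin"}] * 50
print(detect_password_spray(spray))  # ['10.0.0.1']
print(detect_password_spray(brute))  # []
```

Note that spraying routed through Tor, as in this campaign, spreads attempts across many exit-node IPs, so real detections typically aggregate by time window and account set rather than by source address alone.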

Security researchers noted that similar techniques have previously been associated with Iranian groups such as Peach Sandstorm and Gray Sandstorm. The current activity appears to follow a structured sequence. It begins with large-scale scanning and password attempts routed through Tor exit nodes to conceal the origin of the traffic. This is followed by login attempts, and in successful cases, the extraction of sensitive data, including email content from compromised accounts.

Analysis of Microsoft 365 logs revealed patterns consistent with earlier operations attributed to Gray Sandstorm. Investigators observed the use of red-team style tools and infrastructure, as well as commercial VPN services linked to hosting providers previously associated with Iran-linked cyber activity in the region.

To reduce risk, organizations are advised to monitor sign-in activity for unusual patterns, restrict authentication based on geographic conditions, enforce multi-factor authentication for all users, and enable detailed audit logs to support investigation in the event of a breach.


Renewed Activity from Pay2Key Ransomware Operation

In a related development, a U.S.-based healthcare organization was targeted in late February 2026 by Pay2Key, an Iran-linked ransomware group with connections to a broader threat cluster known by multiple aliases. The group operates under a ransomware-as-a-service model and was first identified in 2020.

The version used in this attack represents an upgrade from campaigns observed in July 2025, incorporating improved techniques for evasion, execution, and anti-forensic activity. Reports from Beazley Security and Halcyon indicate that no data was exfiltrated in this instance, marking a shift away from the group’s earlier double-extortion strategy.

The intrusion is believed to have begun through an unknown access point. Attackers then used legitimate remote access software such as TeamViewer to establish a foothold. From there, they harvested credentials to move laterally across the network, disabled Microsoft Defender Antivirus by falsely indicating that another antivirus solution was active, and interfered with system recovery processes. The attackers then deployed ransomware, issued a ransom note, and cleared logs to conceal their activity.

Notably, logs were deleted at the end of the attack rather than at the beginning, ensuring that even the ransomware’s own actions were removed, making forensic analysis more difficult.

The group has also adjusted its affiliate model, offering up to 80 percent of ransom payments, compared to 70 percent previously, particularly for attacks aligned with geopolitical objectives. In addition, a Linux variant of the ransomware has been identified in the wild. This version is configuration-driven, requires root-level access to execute, and is designed to navigate file systems, classify storage mounts, and encrypt data using the ChaCha20 encryption algorithm in either full or partial modes.
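The full-versus-partial distinction is essentially a chunking decision: intermittent (partial) encryption touches only some blocks of each file, which is much faster at scale but still renders the data unusable. A toy sketch of the pattern (Python's standard library has no ChaCha20, so a hash-based keystream stands in for the real cipher; everything here is illustrative):

```python
import hashlib

def keystream(key, nonce, length):
    """Toy hash-based keystream standing in for ChaCha20 (stdlib has
    no ChaCha20); the chunking logic below is the point, not the cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:length]

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

def encrypt(data, key, nonce, mode="full", chunk=1024):
    """'full' encrypts everything; 'partial' (intermittent) encrypts every
    other chunk -- faster on large files, still destroys usability.
    XOR stream ciphers are involutions, so the same call decrypts."""
    if mode == "full":
        return xor(data, keystream(key, nonce, len(data)))
    out = bytearray(data)
    for start in range(0, len(data), 2 * chunk):  # every other chunk
        block = data[start:start + chunk]
        ks = keystream(key, nonce + start.to_bytes(8, "little"), len(block))
        out[start:start + chunk] = xor(block, ks)
    return bytes(out)
```

In partial mode a 4 KB file gets its first and third 1 KB chunks encrypted while the second and fourth are left intact, roughly halving the cipher work per file.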

Before encryption begins, the malware weakens system defenses by stopping services, terminating processes, disabling security frameworks such as SELinux and AppArmor, and setting up a scheduled task to execute after system reboot. These steps allow the ransomware to run more efficiently and persist even after restarts.

Further developments point to coordination among pro-Iranian cyber actors. In March 2026, operators associated with another ransomware strain encouraged affiliates to adopt an alternative tool known as Baqiyat 313 Locker, also referred to as BQTLock, due to a surge in participation requests. This ransomware, which operates with pro-Palestinian motives, has been used in attacks targeting the U.A.E., the United States, and Israel since July 2025.

Cybersecurity experts note that Iran has a long history of using cyber operations as a response to political tensions. Increasingly, ransomware is being integrated into these efforts, blurring the line between financially motivated cybercrime and state-aligned cyber activity. Organizations need to adopt continuous monitoring, strong authentication measures, and proactive defense strategies to counter emerging threats.

AI Datacenter Boom Triggers Global CPU and Memory Shortages, Driving Price Hikes

 

Surging reliance on artificial intelligence is pushing chip production to its limits: shortages once confined to memory chips now affect core processors too. With demand for AI-optimized facilities still climbing, industry leaders warn that delivery delays and cost increases may linger well into the coming decade.

Top chip producers such as Intel and AMD are struggling to keep up with processor demand. With supplies tightening, computer and server builders receive fewer chips than they order, slowing assembly, pushing shipment timelines further out, and lifting prices by roughly 10 to 13 percent. Heavy demand has led key suppliers such as Dell and HP to report deepening shortages, with server parts now taking months rather than weeks to arrive; delays that were once rare are becoming routine.

Experts expect disruptions to worsen into early 2026, straining enterprise systems and home buyers alike. Shrinking CPU availability adds pressure to a memory market already stretched thin: AI-driven datacenter projects have sharply increased demand for DRAM and NAND, pulling production capacity away from smartphones and laptops. As a result, newer technologies such as DDR5 cost more than before, making upgrades less appealing, and many users are holding onto older DDR4 machines simply because replacement feels too costly.

The strain is most visible in everyday device markets. Higher component costs translate directly into steeper laptop prices and slower release cycles. Valve, for example, paused its Linux-powered compact desktop, citing material shortages, while Micron stopped selling memory modules to consumers in order to focus on large-scale computing and artificial intelligence customers. Moves like these show where the sector's attention now lies.

Meanwhile, legacy chip producers face new competition. Arm has launched its first self-designed CPU, built specifically for artificial intelligence workloads, and big names such as Meta, Cloudflare, OpenAI, and Lenovo are reportedly taking interest, drawn by the fresh capacity it promises.

Market projections point to extended disruptions through the 2030s, reshaping how prices evolve and slowing the rhythm of technological advances in chips and computing systems.

Judge Blocks Pentagon's Retaliatory AI Ban on Anthropic

 

A federal judge has temporarily halted the Pentagon's effort to designate AI company Anthropic as a supply chain risk, ruling that the move appeared driven by retaliation rather than legitimate security concerns. In a 48-page order, U.S. District Judge Rita Lin, appointed by former President Joe Biden, granted Anthropic a preliminary injunction against 17 federal agencies, including the Pentagon, preventing them from enforcing the ban until the lawsuit concludes. This keeps Anthropic's Claude AI accessible to government users amid escalating tensions over military contracts. 

The conflict erupted during negotiations to expand a $200 million Pentagon contract with Anthropic. Anthropic refused proposed language permitting "all lawful use" of its AI, citing risks like mass surveillance or autonomous weapons—a stance CEO Dario Amodei publicly emphasized. In response, President Donald Trump posted on Truth Social on February 27 directing agencies to "IMMEDIATELY CEASE all use of Anthropic’s technology," while Defense Secretary Pete Hegseth announced on X that no military partners could engage with the firm. 

On March 4, the administration formalized the designation under two statutes: 41 USC 4713 for federal-wide restrictions and 10 USC 3252 for Defense Department-specific actions. Anthropic swiftly filed lawsuits in California's Northern District and the DC Circuit, arguing the labels were pretextual punishment for its ethical safeguards. Judge Lin agreed, noting the government's shift from contract disputes to broad bans suggested improper motives. 

Pentagon Chief Technology Officer Emil Michael countered on X that Lin's order contained "dozens of factual errors" and insisted the 41 USC 4713 designation remains in effect, as it falls outside her jurisdiction. Anthropic welcomed the swift ruling, reaffirming its commitment to safe AI while awaiting DC Circuit decisions. Legal experts are split: some see the injunction as limited, potentially leaving parts of the ban intact.

This case underscores deepening rifts between AI firms and the government over technology controls in national security. It raises questions about executive power to penalize contractors, the role of public statements in legal proceedings, and the ethics of AI deployment amid rapid advancement. As appeals loom in the 9th Circuit, the dispute could drag on for years, affecting federal AI adoption and Anthropic's partnerships.

Mistral Debuts New Open Source Model for Realistic Speech Generation



Mistral's latest release marks a significant evolution beyond its earlier text-focused systems, extending the company's open-weight philosophy into the increasingly complex domain of speech generation. Rather than functioning as a conventional transcription engine, the model is designed to produce fluid, human-like audio and to sustain responsive, real-time conversational exchanges.

This progression marks a major transformation for AI: from a passive processor of information to an active, voice-enabled participant capable of navigating linguistic nuance and contextual variation. The shift signals a deeper change in interaction paradigms.

AI systems have largely interacted with users through text-based interfaces, where responsiveness and usability are governed by written input and output. Advances in speech synthesis create a more natural interface layer for human-machine communication, one that reduces friction and expands accessibility across diverse user groups.

In intelligent systems, voice has become a central component of user interaction, not just a supplementary feature. What distinguishes Mistral's approach is the combination of technical sophistication and accessibility: by building on an open-weight framework instead of proprietary APIs and centralized infrastructure, developers retain control of their voice technologies.

Organizations can deploy, adapt, and extend voice capabilities within their own environments, fundamentally changing the pace and direction of voice-driven AI innovation. By lowering the barriers to high-fidelity speech synthesis, the model opens the door to broader experimentation and customization.

A notable inflection point has been reached with the introduction of text-to-speech capabilities in this framework. Developers are now able to create fully interactive, voice-enabled agents by integrating natural-sounding audio directly into conversational architectures. 

Beyond static, text-based responses, these systems offer dynamic engagement across a broad range of applications, including assistive technologies, multilingual accessibility solutions, real-time virtual assistants, and interactive multimedia presentations. They are also highly adaptable, allowing parameters such as latency, tone, and contextual awareness to be fine-tuned for specific use cases.

Mistral's architecture places a high emphasis on efficiency and portability and is engineered to operate in constrained computing environments: the model can run on smartphones, wearables, and edge hardware without a continuous cloud connection.

Localized inference reduces latency, enhances data privacy, and guarantees operational continuity in bandwidth-limited or offline settings. This approach directly challenges the centralized processing model that underpins most voice AI products today.

This architecture differentiates Mistral from established providers such as ElevenLabs, whose offerings are built on API-based access and cloud infrastructure. By processing on-device, Mistral improves performance efficiency while addressing growing concerns about data sovereignty and dependence on external providers.

This distinction is especially relevant to organizations in regulated industries, where transmitting sensitive voice data through third-party systems poses compliance and security risks.

While detailed specifications of the model remain limited, early indications suggest it has been optimized through strategies such as structured pruning, low-bit quantization, and architectural refinement, yielding a highly compact parameter footprint. This approach maximizes performance without extensive computational infrastructure, as previously demonstrated in models such as Mistral 7B.
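Of the optimization strategies mentioned, low-bit quantization is the easiest to illustrate: weights are mapped to small integers plus a shared scale factor, shrinking storage at a bounded accuracy cost. A generic int8 sketch (an illustration of the general technique, not Mistral's actual scheme):

```python
def quantize_int8(weights):
    """Affine int8 quantization: map floats to [-127, 127] with a shared
    scale. Generic illustration -- not Mistral's actual method."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 1.0]
q, s = quantize_int8(w)
approx = dequantize(q, s)
# Each reconstructed weight is within one quantization step of the original,
# while each value now needs one byte instead of four (or two).
assert all(abs(a - b) <= s for a, b in zip(w, approx))
```

Real deployments quantize per-channel or per-block and often go below 8 bits, but the storage-versus-precision trade-off is the same.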

The result is a lightweight, deployable solution that balances capability and efficiency, in line with the industry's broader trend toward such models. The significance of this development also extends beyond technical performance: it represents the convergence of speech generation with adjacent AI capabilities such as language understanding and multimodal perception.

As future systems integrate voice, contextual signals, and environmental inputs, these domains will likely be processed simultaneously, enabling more sophisticated, context-aware interactions. Mistral's trajectory remains closely tied to its founding vision: building intelligent systems that operate seamlessly across real-world scenarios.

By emphasizing modularity, transparency, and deployability, the company has positioned itself as an alternative to vertically integrated AI ecosystems. Organizations gain greater control over the infrastructure and data their AI systems use, a concern that becomes increasingly critical as sensitive modalities such as voice are processed by those systems.

Because spoken interactions carry greater complexity around identity, intent, and compliance, localized and customizable solutions are becoming increasingly valuable as enterprises navigate the operational and regulatory implications of these technologies.

The ability to run and fine-tune models within controlled environments offers a compelling alternative to cloud-based solutions, particularly in regions such as Europe where data sovereignty is a pressing concern. Sectors such as finance, healthcare, and public administration, where strict data governance requirements make external processing unfeasible, stand to benefit most.

Beyond speech synthesis, Mistral's broader AI stack gains a critical layer that enables real-time systems capable of listening, reasoning, and responding. For customer support, multilingual communication, and interactive digital platforms, this integrated capability represents a significant competitive advantage.

Several years of improvements in model optimization underpin this technological advancement. Due to the computational requirements associated with real-time audio synthesis, speech generation systems initially relied heavily on cloud infrastructure. 

In recent years, advances in neural architecture design, pruning, and quantization have significantly reduced model size while maintaining high output quality.

Consequently, on-device deployment has become increasingly feasible, shifting the emphasis from raw computational power to adaptability and efficiency. As expectations advance, performance is no longer measured solely by accuracy but also by responsiveness, continuity, and the seamless integration of artificial intelligence into everyday life.

Users are increasingly engaging with systems through natural modalities such as speech rather than through traditional screen-based interfaces. Edge-native, voice-enabled artificial intelligence is emerging as a core component of next-generation computing.

Mistral’s latest release should therefore be understood not as a mere update, but as part of a broader structural shift in artificial intelligence. Those factors reflect an increasing emphasis on openness, efficiency, and user-centered design when shaping AI systems in the future. Mistral has contributed to the movement toward more distributed, adaptable, and resilient AI ecosystems by extending its capabilities into speech while maintaining its commitment to accessibility and control. 

Human interaction with machines is likely to be reshaped by the convergence of speech, language, and contextual intelligence in the years ahead. Systems are expected to no longer merely respond to commands, but to engage in fluid, ongoing dialogues resembling natural communication.

This emerging landscape positions Mistral at the forefront of a transformation that is essentially experiential rather than technological, reshaping the boundaries of interaction in an increasingly voice-driven environment.

Microsoft 365 Phishing Bypasses MFA via OAuth Device Codes

 

A recent wave of phishing attacks is bypassing traditional security protections on Microsoft 365, even when multi‑factor authentication (MFA) is enabled. Instead of stealing passwords directly, attackers are abusing legitimate Microsoft login flows to trick users into granting access to their own accounts, effectively sidestepping the security codes that many organizations rely on for protection. These campaigns have already compromised hundreds of organizations, highlighting how modern phishing has evolved beyond simple fake login pages into sophisticated, session‑based attacks. 

The core technique leverages Microsoft’s OAuth 2.0 device authorization flow, a feature designed for devices like printers and TVs that cannot display a full browser. Users receive a phishing email or SMS that looks like a legitimate Microsoft prompt, often claiming that a “secure authorization code” must be entered on a Microsoft login page. When the victim goes to the real Microsoft domain and inputs the code, they quietly grant an attacker‑controlled application long‑lived OAuth tokens that provide full access to their Microsoft 365 mailbox, OneDrive, and Teams. 
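The abuse rides on two entirely legitimate HTTP requests from Microsoft's documented device authorization flow: the attacker initiates the flow, forwards the resulting user code to the victim, and polls the token endpoint until the victim approves on the genuine Microsoft page. A sketch of the two payloads (the client_id and tenant values are illustrative placeholders; the endpoints and grant type are Microsoft's published ones):

```python
# The two requests behind the OAuth 2.0 device authorization flow.
# Step 1 yields a device_code (kept by the attacker) and a user_code
# (sent to the victim); step 2 is polled until the victim enters the
# user_code and completes MFA on the real Microsoft login page.

TENANT = "common"  # illustrative; a specific tenant ID also works

def device_code_request(client_id, scope):
    """Step 1: request a device_code / user_code pair from Microsoft."""
    return {
        "url": f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/devicecode",
        "data": {"client_id": client_id, "scope": scope},
    }

def token_poll_request(client_id, device_code):
    """Step 2: poll the token endpoint; succeeds once the victim approves,
    returning access and refresh tokens without any password capture."""
    return {
        "url": f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
        "data": {
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "client_id": client_id,
            "device_code": device_code,
        },
    }
```

Because both requests target login.microsoftonline.com, nothing in the flow looks like phishing infrastructure; the only attacker-controlled element is the lure that delivers the user code.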

Because the login happens on an actual Microsoft site, common phishing filters and user instincts often fail to detect anything unusual. The attacker never needs to capture a password or intercept an SMS code; they simply harvest the access and refresh tokens issued by Microsoft after the user completes MFA. This means that even changing passwords or waiting for a code to expire does not immediately cut off the attacker, since the stolen tokens can persist for extended periods unless explicitly revoked. 

From there, threat actors typically move laterally inside the environment, reading sensitive emails, staging more phishing messages to contacts and colleagues, and sometimes preparing for business email compromise or invoice fraud. In some cases, compromised accounts are used to send follow‑up phishing emails that appear to come from within the organization, making them harder to flag and more likely to succeed. This “inside‑out” style of attack undermines trust in internal communications and can significantly slow down detection and response. 

To counter these threats, organizations must go beyond standard MFA and focus on identity‑centric protections, including conditional access policies, risky‑sign‑in monitoring, and regular review of granted OAuth applications. Users should be trained to treat any unexpected authorization or device‑code request as suspicious, especially if they did not initiate a login, and to report such messages immediately. Combining strong technical controls with continuous security awareness remains the most effective way to reduce the risk of these advanced phishing campaigns on Microsoft 365.

New RBI Rule Makes 2FA Mandatory for All Digital Payments


Two-factor authentication (2FA) will be required for all digital transactions under the new framework, drastically altering how customers pay with cards, mobile wallets, and UPI.

India's financial landscape is set to change as the Reserve Bank of India (RBI) introduces new security measures for all electronic payments. The rules take effect on 1 April 2026: every digital payment must be verified through a compulsory two-factor authentication process. The rule aims to address the growing number of cybercrimes and phishing campaigns targeting India's mobile wallets and UPI. Security traditionally relied on SMS codes, but the framework now moves toward a more versatile model as regulators try to stay ahead of threat actors and scammers.

The shift to a dynamic verification model

The new directive mandates that at least one of the two authentication factors must be dynamic: generated specifically for a single transaction and impossible to reuse. Fintech providers and banks can now freely choose from a variety of methods, such as hardware tokens, biometrics, and device binding. This marks a departure from the traditional era, in which OTPs delivered via SMS were the main line of defence.
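One common way to build such a single-use, transaction-bound factor is to derive a short code from the transaction's own details with a keyed hash. A generic sketch using HOTP-style truncation (RFC 4226); this is an illustration of the concept, not the RBI-mandated algorithm:

```python
import hashlib
import hmac

def transaction_code(secret, txn_id, amount, payee, digits=6):
    """Derive a one-time code bound to one transaction's details.
    Because txn_id is unique per payment, the code is inherently
    dynamic: it cannot be replayed against any other transaction."""
    msg = f"{txn_id}|{amount}|{payee}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # Dynamic truncation as in RFC 4226 (HOTP)
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Binding the code to the amount and payee also defeats tampering: if an attacker alters either field in transit, the verifier's recomputed code no longer matches.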

Risk-based verification

To make security convenient, banks will follow a risk-based approach. 

Low-risk: Payments from authorized devices or standard small transactions will be quick and seamless. 

High-risk: Big payments or transactions from new devices may prompt further authentication steps.
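A risk-based policy of this kind can be sketched as a simple scoring function; the thresholds, signals, and step names below are illustrative assumptions, not RBI requirements:

```python
def required_auth_steps(amount_inr, device_known, payee_known):
    """Illustrative risk-based policy: low-risk payments stay seamless,
    higher-risk ones trigger additional authentication steps."""
    risk = 0
    if amount_inr > 50_000:   # large payment (illustrative threshold)
        risk += 2
    if not device_known:      # new or unrecognized device
        risk += 2
    if not payee_known:       # first payment to this payee
        risk += 1
    if risk >= 3:
        return ["pin", "dynamic_code", "biometric"]  # step-up challenge
    if risk >= 1:
        return ["pin", "dynamic_code"]
    return ["pin"]            # trusted device, small routine payment
```

A small recurring payment from a known device passes with minimal friction, while a large transfer from a new device collects every available factor.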

“RBI’s new digital payment security controls coming into force represent a significant recalibration of India’s authentication framework – from a prescriptive OTP-based regime to a more principle-driven, risk-based standard,” experts said.

Building institutions via technology neutrality

The RBI no longer prescribes the particular technology used for verification; instead, it focuses on the security of the outcome.

Why the technology-neutral stance?

The technology-neutral stance permits financial institutions to use sophisticated solutions like passkeys or facial recognition without requiring frequent regulatory notifications. The central bank will follow the principle-driven practice by boosting innovation while holding strict compliance. According to experts, “By recognising biometrics, device-binding and adaptive authentication, RBI has created interpretive flexibility for regulated entities, while retaining supervisory oversight through outcome-based compliance.”

Impact on bank accountability

The RBI has raised accountability standards, making banks and payment companies more responsible for maintaining safe systems. Institutions may be obliged to reimburse users when fraud results from system malfunctions or errors, a measure intended to expedite the resolution of fraud-related grievances.
