
All the recent news you need to know

Google Expands Gemini in Gmail, Forcing Billions to Reconsider Privacy, Control, and AI Dependence

 




Google has introduced one of the most extensive updates to Gmail in its history, warning that the scale of change driven by artificial intelligence may feel overwhelming for users. While some discussions have focused on surface-level changes such as switching email addresses, the company has emphasized that the real transformation lies in how AI is now embedded into everyday tools used by nearly two billion people. This shift requires far more serious attention.

At the center of this evolution is Gemini, Google’s artificial intelligence system, which is being integrated more deeply into Gmail and other core services. In a recent update shared through a short video message, Gmail’s product leadership acknowledged that the rapid pace of AI innovation can leave users feeling overloaded, with too many new features and decisions emerging at once.

Gmail has traditionally been built around convenience, scale, and seamless integration rather than strict privacy-first principles. Although its spam filters and malware detection systems are widely used and generally effective, they are not flawless. Importantly, Gmail has not typically been the platform users turn to for strong privacy assurances.

The introduction of Gemini changes this balance substantially. Google has clarified that it does not use email content to train its AI models. However, the way these tools function introduces new concerns. Features that automatically draft emails, summarize conversations, or search inbox content require access to emails that may contain highly sensitive personal or professional information.

To address this, Google describes Gemini as a temporary assistant that operates within a limited session. The company compares this interaction to allowing a helper into a private room containing your inbox. The assistant completes its task and then exits, with the accessed information disappearing afterward. According to Google, Gemini does not retain or learn from the data it processes during these interactions.

Despite these assurances, concerns remain. Even if the data is not stored long term, granting a cloud-based AI system access to private communications introduces an inherent level of risk. Additionally, while Google has denied automatically enrolling users into AI training programs, many of these AI-powered features are expected to be enabled by default. This shifts responsibility to users, who must actively decide how much access they are willing to allow.

This is not a decision that can be ignored. Once AI tools become integrated into daily workflows, they are difficult to remove. Relying on default settings or delaying action could result in long-term dependence on systems that users may not fully understand or control.

Shortly after promoting these updates, Gmail experienced a disruption that affected its core functionality. Users reported delays in sending and receiving emails, and Google acknowledged the issue while working on a fix. Initially, no estimated resolution time was provided. Later the same day, the company confirmed that the issue had been resolved.

According to Google’s official status update, the disruption was fixed on April 8, 2026, at 14:49 PDT. The cause was identified as a “noisy neighbor,” a term used in cloud computing to describe a situation where one service consumes excessive shared resources, negatively impacting the performance of others operating on the same infrastructure.

With a user base of approximately two billion, even a short-lived outage is a serious concern. More importantly, it underscores the scale at which Gmail operates and reinforces why decisions around AI integration are critical for users worldwide.

The central issue now facing users is the balance between convenience and security. Google presents Gemini as a helpful and well-behaved assistant that enhances productivity without overstepping boundaries. However, like any guest given access to a private space, it requires clear rules and careful oversight.

This tension becomes even more visible when considering Google’s parallel efforts to strengthen security. The company recently expanded client-side encryption for Gmail on mobile devices. While this may sound similar to end-to-end encryption used in messaging apps, it is not the same. This form of encryption operates at an organizational level, primarily for enterprise users, and does not provide the same device-specific privacy protections commonly associated with true end-to-end encryption.

More critically, enabling this additional layer of encryption significantly limits Gmail's functionality. When it is turned on, several features become unavailable. Users can no longer use confidential mode, access delegated accounts, apply advanced email layouts, or send bulk emails using multi-send options. Features such as suggested meeting times, pop-out or full-screen compose windows, and sending emails to group recipients are also disabled.

In addition, personalization and usability tools are affected. Email signatures, emojis, and printing functions stop working. AI-powered tools, including Google’s intelligent writing and assistance features, are also unavailable. Other smart Gmail features are disabled, and certain mobile capabilities, such as screen recording and taking screenshots on Android devices, are restricted.

These limitations exist because encrypted data cannot be accessed by AI systems. As a result, users are forced to choose between stronger data protection and access to advanced features. The same mechanisms that secure information also prevent AI tools from functioning effectively.

This reflects a bigger challenge across the technology industry. Privacy and security measures often limit the capabilities of AI systems, which depend on access to data to operate. In Gmail’s case, these two priorities do not align easily and, in many ways, directly conflict.

From a wider perspective, this also highlights a fundamental limitation of email itself. The technology was developed in an earlier era and was not designed to handle modern cybersecurity threats. Its underlying structure lacks the robust protections found in newer communication platforms.

As artificial intelligence becomes more deeply integrated into everyday tools, users are being asked to make more informed and deliberate decisions about how their data is used. While Google presents Gemini as a controlled and temporary assistant, the responsibility ultimately lies with users to determine their comfort level.

For highly sensitive communication, relying solely on email may no longer be the safest option. Exploring alternative platforms with stronger built-in security may be necessary. Ultimately, this moment represents a critical choice: whether the convenience offered by AI is worth the level of access it requires.

CISO Burnout Is Costing Businesses More Than Money

 

Businesses are increasingly feeling the financial and operational impact of CISO burnout, as overstretched security leaders make slower decisions, miss critical signals, and eventually leave their roles. The pressure of rising cyber threats, regulatory demands, and limited resources is turning the CISO position into a high‑turnover, high‑cost liability rather than a strategic asset. 

Why CISOs are burning out 

CISOs today face an “always‑on” workload, with AI‑driven attacks, expanding digital estates, and constant audits leaving little room for rest. Many report chronic stress, decision fatigue, and missed family events, while still working well beyond contracted hours to keep up. Boards often understand the pressure in theory, but fail to translate this into better staffing, budgets, or clearer priorities.

When a burned‑out CISO resigns or takes extended leave, firms pay not only recruitment and onboarding costs, but also the hidden price of lost productivity and disrupted projects. One expert estimates total CISO replacement costs can exceed 200% of salary when incident‑related losses, staff turnover, and delayed IT initiatives are factored in. Incidents that might have been caught earlier are more likely to slip through, raising breach‑related expenses and reputational damage. 

Impact on security and board confidence 

Burnout erodes cyber resilience by weakening threat detection, slowing crisis‑time decisions, and degrading communication of risk to the board. As CISOs disengage, security can become an afterthought, initiatives stall, and internal morale in security teams drops. This visibly undermines confidence at the top, making it harder to secure long‑term investment in modern security programs.

To break the cycle, companies must invest in prevention: realistic job design, adequate headcount, clear mandates, and mental‑health support. Some firms are shifting toward fractional or portfolio‑style CISOs, spreading responsibility and reducing single‑point pressure. Firms that treat CISO well‑being as a core part of risk management will likely see better retention, stronger security posture, and lower overall breach‑related costs.

Anthropic AI Cyberattack Capabilities Raise Alarm Over Vulnerability Exploitation Risks

 

Artificial intelligence is reshaping cybersecurity faster than expected, and new evidence from Anthropic suggests it could also fuel digital threats more intensely than ever before. Recently disclosed results indicate that the company's most advanced AI does not just detect flaws in code; it can proceed on its own to exploit them. That capability signals a turning point in what future attacks may look like, and it is this shift in how attacks unfold, with machines acting without waiting for human direction, that worries experts most.

One key moment arrived when Anthropic uncovered a complex espionage operation in which hackers, likely state-backed, did not just plan with artificial intelligence but let it carry out actions during the breach itself. That shift matters because it shows machine-driven systems performing tasks once handled only by human operators inside an intrusion. Anthropic has also revealed what its newest test model, Claude Mythos Preview, can do. The firm says the model found numerous serious flaws in common operating systems and software, flaws that had stayed hidden for long stretches of time. Beyond spotting issues, the system chained several weaknesses together into working attack methods, something usually done by expert humans.

What stands out is how little oversight was needed during these operations, and how the combination of spotting weaknesses and acting on them marks a notable shift. This is more than incremental change: specialists such as Mantas Mazeika point to AI-powered threats moving into uncharted territory, with automated systems ramping up attack frequency and reach. Allie Mellen offers a related observation: under AI pressure, the gap between detecting a flaw and weaponizing it shrinks fast, cutting companies' response windows down to almost nothing. Among the issues highlighted by Anthropic were lingering flaws in OpenBSD and FFmpeg, surfaced through the model's analysis, alongside intricate exploitation chains targeting Linux servers.

Such discoveries raise questions about whether current defenses can keep pace with threats accelerated by artificial intelligence. For now, Anthropic is withholding public access to the capability entirely; it is available only to a select group of technology firms through a program meant to spot weaknesses early. The move comes as others in the industry voice similar concerns about misuse, and safety is being weighed above speed where such advanced systems are involved. Still, experts suggest the progress brings both danger and potential: though risky, the same tools might help uncover flaws early and shield networks before breaches occur.

Success, though, depends on collaboration: companies, officials, and defenders must reshape how they handle code fixes and protection strategies, because without shared initiative these gains could falter under old habits. Advancing AI is now shaping how threats both emerge and are countered. With speed on their side, attackers find new openings just as quickly as defenders build stronger shields, so staying ahead means defense must grow not just faster but smarter, matching each leap taken by adversaries before gaps widen.

Chrome Advances User Protection with new Infostealer Mitigation Features


 

Google Chrome has taken a significant step toward hardening browser-level authentication security in response to the growing threat landscape by introducing Device Bound Session Credentials in its latest Windows update. 

As part of Chrome 146, this mechanism has been developed to address a long-standing vulnerability in web session management by preventing authenticated sessions from being portable across devices. It is based on the use of hardware-backed trust anchors that bind session credentials directly to the user's machine, thereby significantly increasing the barrier to attackers attempting to reuse stolen authentication tokens. 

By implementing cryptographic safeguards at the device level, the update reflects a broader shift in browser security architecture towards reducing the impact of credential theft rather than merely responding to it after the fact. Device Bound Session Credentials build on this foundation by generating a unique public/private key pair inside secure hardware components, such as the Trusted Platform Module on Windows systems, and using that key pair to authenticate sessions.

By design, session credentials cannot be replicated or transferred even if they are compromised at the software layer, because these keys are not exportable. Now available to Windows users, with macOS support expected in subsequent versions, the feature directly addresses the mechanics of modern session hijacking.

A typical attack scenario involves the execution of a malicious payload that launches information-stealer malware, which harvests cookies stored in the browser or silently intercepts newly established sessions. LummaC2 is one prominent example of such an infostealer family.

Because these cookies often persist beyond a single login, they give attackers a durable means of unauthorized access, one that bypasses traditional authentication controls such as passwords and multi-factor authentication.

In addition to disrupting the attack chain at a structural level, Chrome's latest enhancement also limits the reuse and monetization of stolen session data across threat actor ecosystems by cryptographically anchoring session validity to the originating device.

Initially introduced in 2024, the underlying security model ties each authenticated session to both a user identity and the integrity of the hardware on which it was established. It accomplishes this by cryptographically binding each active session to device-resident security components, such as the Trusted Platform Module on Windows and the Secure Enclave on macOS.

The hardware-backed environment generates and safeguards the asymmetric key pairs used to sign and validate session data, and the private key is strictly non-transferable. Consequently, even if session artifacts such as cookies were extracted from the browser, they could not be reused on another system without the corresponding cryptographic context.

By ensuring that session validity is intrinsically linked to the device that generated it, this design fundamentally shifts the attack surface. Throughout a session's lifecycle, the mechanism adds a further verification layer: for the server to grant and renew short-lived session cookies, the browser must demonstrate possession of the associated private key.

Rather than being a static token, each session is effectively a continuously validated cryptographic exchange. The system defaults to conventional session handling in environments without secure hardware support, preserving backward compatibility. 
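As a rough illustration of that exchange, here is a minimal Python sketch of the server side of such a scheme, using the `cryptography` package. It is an assumption-laden sketch rather than Chrome's or Google's actual implementation: the function names and in-memory session store are invented for illustration, and a real browser would keep the private key inside the TPM or Secure Enclave rather than in process memory.

```python
# Hypothetical server-side sketch of a device-bound session refresh,
# assuming the third-party `cryptography` package. Names and the
# in-memory store are illustrative; this is not Chrome's implementation.
import os
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

SESSIONS = {}  # session_id -> public key registered when the session was bound


def register_session(session_id, public_key):
    # At sign-in, the browser proves possession of a device-bound key pair
    # and the server records only the public half.
    SESSIONS[session_id] = public_key


def refresh_cookie(session_id, challenge, signature):
    # To renew the short-lived cookie, the client must sign the server's
    # challenge with the non-exportable private key held on the device.
    public_key = SESSIONS[session_id]
    try:
        public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return None  # a stolen cookie without the device key gets no refresh
    return secrets.token_urlsafe(32)  # issue a fresh short-lived cookie


# Demo: simulate the browser with an in-memory key. In the real scheme the
# private key would live inside the TPM or Secure Enclave, never in RAM.
device_key = ec.generate_private_key(ec.SECP256R1())
register_session("sess-1", device_key.public_key())

challenge = os.urandom(32)
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
print(refresh_cookie("sess-1", challenge, signature) is not None)  # True
```

The point of the design lies in that final check: a cookie stolen from the browser is worthless to an attacker who cannot also produce a signature from the device-bound key.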

Early telemetry indicates that the approach is already altering attacker economics, with a measurable decline in session theft attempts. Through collaboration with Microsoft, the architecture is intended to evolve into an open web standard while incorporating privacy-centric safeguards.

The use of device-specific, non-reusable keys prevents cross-site correlation of user activity by design, enhancing both security and privacy without adding new tracking vectors. At the implementation level, the framework is designed to integrate easily with existing web architectures without imposing significant operational overhead on service providers.

Chrome assumes responsibility for key management, cryptographic validation, and dynamic cookie rotation, so only minimal backend modification is needed to adopt hardware-bound session security.

In this manner, the protocol maintains compatibility with traditional session handling models while adding a further layer of trust beneath them. It is also designed according to strict principles of data minimization: only a per-session public key is shared for authentication, preventing the exposure of persistent device identifiers and minimizing the risk of cross-site tracking.

The open standard is being developed within the World Wide Web Consortium's Web Application Security Working Group, together with Microsoft and in consultation with identity platform providers such as Okta, to ensure interoperability across diverse authentication ecosystems. After a controlled deployment in 2025, early results indicate a significant decrease in session hijacking incidents, reinforcing confidence in the broader rollout, which is now available for Windows in Chrome 146 and anticipated for macOS in the near future.

At the same time, development efforts are underway to extend capabilities to federated identity models, enable cross-origin key binding, and utilize existing trusted credentials, such as mutual TLS and hardware security keys, while exploring software-based alternatives to broaden enterprise adoption. Despite the introduction of hardware-based protections, adversarial adaptation has not been eliminated. 

Bypass techniques have already emerged that target Chrome's Application-Bound Encryption layer, largely through misuse of internal debugging interfaces originally intended to facilitate development and remote management of Chrome. By enabling remote debugging over designated ports, attackers can extract cookies directly from the browser rather than resorting to more detectable methods such as memory scraping or process injection.

This method, observed with infostealer strains such as Phemedrone, is comparatively stealthy because it abuses legitimate browser functionality to evade conventional detection mechanisms. Browser processes launched with debugging flags and anomalous activity targeting common ports such as 9222 are indicators of compromise.
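As a rough sketch of what hunting for that indicator could look like, the following Python example (assuming the third-party `psutil` package; the flag and process names checked are illustrative choices, not an official or exhaustive detection rule) flags browser processes launched with a remote-debugging flag.

```python
# Illustrative hunting sketch, assuming the third-party `psutil` package.
# The flag and process names below are example choices, not an official
# or exhaustive detection rule.
import psutil

SUSPECT_FLAG = "--remote-debugging-port"
BROWSER_NAMES = {"chrome", "chrome.exe", "msedge", "msedge.exe"}


def find_debug_enabled_browsers():
    hits = []
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        try:
            name = (proc.info["name"] or "").lower()
            cmdline = " ".join(proc.info["cmdline"] or [])
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        # A browser launched with a remote-debugging port is a common
        # indicator of DevTools-based cookie extraction tooling.
        if name in BROWSER_NAMES and SUSPECT_FLAG in cmdline:
            hits.append((proc.info["pid"], cmdline))
    return hits


if __name__ == "__main__":
    for pid, cmdline in find_debug_enabled_browsers():
        print(f"Suspicious browser process {pid}: {cmdline}")
```

The same logic can just as well be expressed as a process-creation rule keyed on the command line in whatever EDR or SIEM tooling is already in place.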

Application-Bound Encryption was initially adopted for Windows environments; however, similar techniques have been demonstrated that bypass protections on macOS and Linux, as well as native credential storage systems. Although comprehensive attribution of the malware families involved remains incomplete, the underlying vector points to a pattern of exploitation that could be replicated across the threat landscape.

As a result, security teams should expect a persistent “cat-and-mouse” dynamic in identity and access management, in which defensive innovations are quickly met with new bypasses. Within weeks of the feature's initial release, bypass strategies were already emerging, demonstrating the need for continuous monitoring, hardened configurations, and layered defenses to maintain the integrity of session-based authentication.

The development illustrates the broader need for organizations to move beyond single-layer defenses and adopt a multi-layered security posture. While hardware-bound session protection represents a significant advancement, its effectiveness ultimately depends on complementary controls across the environment.

Consequently, security teams should enforce strict browser configurations, monitor for anomalous debugging activity, and restrict access to remote management interfaces. Integrating endpoint detection with identity-aware access controls, shortening session lifespans, and enforcing continuous authentication checks can further reduce the window of exploitation.

As browser vendors continue to refine these mechanisms, enterprises should align their defensive strategies accordingly. Session security should be treated as an evolving discipline requiring ongoing vigilance and adaptive response, rather than a fixed safeguard.

Critical SGLang Vulnerability Allows Remote Code Execution via Malicious AI Model Files

 



A newly disclosed high-severity flaw in SGLang could enable attackers to remotely execute code on affected servers through specially crafted AI model files.

The issue, tracked as CVE-2026-5760, has received a CVSS score of 9.8 out of 10, placing it in the critical category. Security analysts have identified it as a command injection weakness that allows arbitrary code execution.

SGLang is an open-source framework built to efficiently run large language and multimodal models. Its popularity is reflected in its development activity, with more than 5,500 forks and over 26,000 stars on its public repository.

According to the CERT Coordination Center, the flaw affects the “/v1/rerank” endpoint. An attacker can exploit this functionality to run malicious code within the context of the SGLang service by using a specially designed GPT-Generated Unified Format (GGUF) model file.

The attack relies on embedding a malicious payload inside the tokenizer.chat_template parameter of the model file. This payload uses a server-side template injection technique through the Jinja2 templating engine and includes a specific trigger phrase that activates the vulnerable execution path.

Once the victim downloads and loads the model, often from repositories such as Hugging Face, the risk becomes active. When a request reaches the “/v1/rerank” endpoint, SGLang processes the chat template using its templating engine. At that moment, the injected payload is executed, allowing the attacker to run arbitrary Python code on the server and achieve remote code execution.

Security researcher Stuart Beck traced the root cause to unsafe template handling. Specifically, the framework uses a standard Jinja2 environment instead of a sandboxed configuration. Without isolation controls, untrusted templates can execute system-level code during rendering.

The attack unfolds in a defined sequence: a malicious GGUF model is created with an embedded payload; it includes a trigger phrase tied to the Qwen3 reranker logic located in “entrypoints/openai/serving_rerank.py”; the victim loads the model; a request hits the rerank endpoint; and the template is rendered using an unsafe environment, leading to execution of attacker-controlled Python code.

This vulnerability falls into the same class as earlier issues such as CVE-2024-34359, a critical flaw in llama_cpp_python, and CVE-2025-61620, which affected another model-serving system. These cases highlight a recurring pattern where unsafe template or model handling introduces execution risks.

To mitigate the issue, CERT/CC recommends replacing the current template engine configuration with a sandboxed alternative such as ImmutableSandboxedEnvironment. This would prevent execution of arbitrary Python code during template rendering. At the time of disclosure, no confirmed patch or vendor response had been issued.
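To make the distinction concrete, here is a small standalone Python sketch of the vulnerability class. It is not SGLang's code, and the payload shown is a generic textbook example rather than the one used in the reported exploit; it simply shows how a default Jinja2 environment renders a template that reaches the operating system, while ImmutableSandboxedEnvironment rejects the same payload.

```python
# Standalone sketch of the vulnerability class (server-side template
# injection in Jinja2). This is not SGLang's code, and the payload is a
# generic textbook example rather than the one used in the real exploit.
from jinja2 import Environment
from jinja2.sandbox import ImmutableSandboxedEnvironment, SecurityError

# A chat-template string carrying a classic Jinja2 SSTI payload that
# reaches the `os` module through a builtin global's __globals__ mapping.
malicious_template = "{{ lipsum.__globals__['os'].popen('id').read() }}"

# Unsafe: a default Environment renders the template, so the embedded
# payload executes a shell command during rendering.
print(Environment().from_string(malicious_template).render())

# Mitigated: the sandboxed environment refuses access to internal
# attributes such as __globals__ and raises SecurityError instead.
try:
    ImmutableSandboxedEnvironment().from_string(malicious_template).render()
except SecurityError as exc:
    print(f"Blocked by sandbox: {exc}")
```

CERT/CC's recommended fix for SGLang amounts to the second half of this sketch: rendering untrusted chat templates only inside a sandboxed environment.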

From a broader security lens, this incident reinforces a growing concern in AI infrastructure. Model files are increasingly being treated as trusted inputs, despite their ability to carry executable logic. As adoption expands, organizations must validate external models, restrict execution environments, and continuously monitor inference systems to reduce the risk of compromise.

ChipSoft Ransomware Attack Disrupts Dutch Healthcare Systems and HiX EHR Services

 

A sudden cyberattack targeting ChipSoft triggered widespread interruptions in essential health IT operations throughout the Netherlands, leading officials to isolate key network segments. While public access tools went down, medical staff also lost functionality within core administrative environments - prompting urgent questions around resilience under pressure and protection of sensitive records. 

Because of the cyberattack, ChipSoft shut down multiple services such as Zorgportaal, HiX Mobile, and Zorgplatform to limit possible damage. Hospitals across the nation rely on ChipSoft's main system, HiX, making it a key player in digital medical records. As a result, clinics received warnings urging them to cut connections to ChipSoft platforms until safety is confirmed. Preventive steps like these aim to reduce risks while experts handle the breach. 

Confirmation later came via local news outlets, following early signals from public posts online. A company-issued notice cited signs of intrusion into operational systems and hinted at possible data exposure without confirming a full compromise. Not long afterward, Z-CERT officially classified the incident as a ransomware event and began coordinating the response across affected healthcare organizations. Outages then spread through several hospitals: Sint Jans Gasthuis in Weert felt the effects early, followed by disruptions at Laurentius Hospital in Roermond, and digital tools slowed or stopped working altogether at VieCuri Medical Center in Venlo.

Flevo Hospital in Almere also saw restricted system availability soon afterward. Even though certain departments kept running, performance gaps between locations revealed deeper weaknesses. When cyber incidents strike, medical technology networks often struggle more than expected. Healthcare tech firms often serve many hospitals at once, making them prime targets for ransomware attacks. 

When one falls victim, consequences tend to ripple through linked facilities without warning: patient treatment slows, daily operations stumble, and records become unreachable. Although ChipSoft has mentioned efforts to reduce harm, it has shared little about what information might be exposed, and confirmation of how deep the breach goes remains absent so far. The incident follows several earlier breaches at medical technology companies worldwide, further proof of rising exposure.

With hospitals shifting more operations online, criminals now zero in on organizations holding vast amounts of vital data; it is the value of that data, rather than the speed of an attack, that draws attention over time. Systems once isolated now face constant probing from distant actors watching for gaps. Work to regain control continues, with officials and cyber defense teams assessing the damage while bringing services back online.

The breach at ChipSoft highlights once more how vital strong cyber protections are within medical infrastructure, where even short outages can have severe consequences that extend well beyond the screen.
