
Researchers Investigate AI Models That Can Interpret Fragmented Cognitive Signals


 

The human brain has been among the most complex and least understood systems in science for decades, and it remains so. Advances in brain-imaging technology have enabled researchers to observe neural activity in stunning detail, showing how different areas of the brain light up when a person listens, speaks, or processes information. What drives these patterns, however, has yet to be fully understood.

Intricate waves of electrical signals and shifting clusters of activity show that the brain is working, but the deeper question of how those signals translate into meaning remains largely unresolved. Neuroscientists, linguists, and psychologists have long struggled to explain how the brain transforms words into coherent thoughts.

Recent developments at the intersection of neuroscience and artificial intelligence are beginning to change this picture. By analyzing detailed recordings of brain activity with advanced deep learning techniques, researchers are finding patterns which suggest that the human brain may interpret language in a manner similar to modern artificial intelligence models.

Rather than applying rigid grammatical rules alone, the brain appears to build meaning gradually as speech unfolds, layering context and interpretation along the way. This emerging view offers insight into the mechanisms of human comprehension and may ultimately change how scientists study language, cognition, and the neural foundations of thought.

The implications of this emerging understanding are already being explored in experimental clinical settings. In one such study, researchers worked with a participant who had lived with severe speech impairment for nearly two decades following a stroke. She remained physically still, her subtle breathing rhythm the only visible movement, yet complex neural activity was unfolding beneath the surface.

As she imagined speaking, words appeared on a nearby screen, gradually combining into complete sentences that she could not convey aloud. The participant, a 52-year-old identified as T16, had been implanted with a small array of electrodes in the frontal regions of her brain responsible for language planning and motor speech control.

A deep-learning system analyzed these signals and translated them into written text in near real time as she mentally rehearsed words through the implanted interface. As part of a broader investigation conducted at Stanford University, the same experimental framework was applied to additional volunteers with amyotrophic lateral sclerosis, a neurodegenerative condition.

By integrating high-resolution neural recordings with machine learning models capable of recognizing complex activity patterns, the system attempted to reconstruct intended speech directly from the recorded brain signals, as sketched below.
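To make the decoding step concrete, here is a purely illustrative sketch. It assumes, as many published BCI decoders do (the Stanford system's internals are not detailed here), that a trained model emits per-timestep phoneme probabilities which are then collapsed, CTC-style, into a label sequence. The phoneme list and all values are synthetic.

```python
import numpy as np

# Hypothetical illustration only: real systems train large recurrent or
# transformer decoders on multi-electrode recordings. Here we invent a
# (timesteps x classes) matrix standing in for the decoder's output.
PHONEMES = ["_", "HH", "EH", "L", "OW"]  # "_" = CTC blank; list is made up

rng = np.random.default_rng(0)
logits = rng.normal(size=(12, len(PHONEMES)))  # fake per-timestep scores
best = logits.argmax(axis=1)                   # most likely class per step

# Greedy CTC collapse: drop repeated labels, then drop blanks.
decoded, prev = [], None
for idx in best:
    if idx != prev and PHONEMES[idx] != "_":
        decoded.append(PHONEMES[idx])
    prev = idx
print(decoded)  # a short phoneme sequence recovered from the fake scores
```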

Although the approach is still experimental, it represents a significant step in brain-computer interface research aimed at converting internal speech into readable language, bringing the field closer to technologies that may one day restore communication to individuals who have lost the ability to speak.

Neural decoding is also being explored beyond speech reconstruction. A recent experiment at NTT, Inc.'s Communication Science Laboratories in Japan demonstrated that visual thoughts can be converted into written descriptions using a technique known as “mind captioning”. Unlike earlier brain–computer interfaces that required participants to attempt or imagine speaking, this approach interprets neural activity related to perception and memory.

The system produces textual descriptions from patterns in brain signals, offering a glimpse of how internal visual experiences can be translated into language without any physical act of communication. The method combines functional magnetic resonance imaging with advanced language modeling techniques.

Functional MRI measures subtle changes in blood flow throughout the brain, enabling researchers to map neural responses as participants watch video footage and later recall those same scenes. A pretrained language model is then used to generate semantic representations: numerical structures that encode relationships between concepts, objects, and actions.

These representations act as an intermediary layer linking raw brain activity to linguistic expression. A decoding model aligns the observed neural signals with the semantic structures, and a language model gradually refines the resulting text so that it reflects the meaning implicit in the recorded brain activity.
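As a rough sketch of that alignment step, one can imagine a linear ridge regression mapping voxel patterns to caption embeddings, then decoding a new scan by nearest-neighbor search in embedding space. This is an assumption-laden toy, not NTT's published pipeline, and every array below is synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_scans, n_voxels, emb_dim = 200, 500, 64

# Pretend embeddings of video captions from a language model, plus fMRI
# patterns that are a noisy linear image of those embeddings.
caption_emb = rng.normal(size=(n_scans, emb_dim))
fmri = caption_emb @ rng.normal(size=(emb_dim, n_voxels))
fmri += 0.1 * rng.normal(size=fmri.shape)  # measurement noise

# Learn voxels -> embedding on 150 scans, then decode a held-out scan by
# finding the caption whose embedding best matches the prediction.
decoder = Ridge(alpha=1.0).fit(fmri[:150], caption_emb[:150])
pred = decoder.predict(fmri[150:151]).ravel()
print("best-matching caption:", int((caption_emb @ pred).argmax()))  # -> 150
```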

In experimental trials, the system's descriptions of short video clips often captured the overall context, including interactions between individuals, objects, and environments. Even when it misidentified a specific object, it often preserved the relationships and actions occurring in the scene, indicating that the model was interpreting conceptual patterns rather than simply retrieving memorized phrases.

Notably, the process does not depend primarily on the brain's conventional language-processing regions. Instead, it constructs meaningful descriptions from neural signals originating in areas involved in visual perception and conceptual understanding. The implications of this technology extend well beyond experimental neuroscience.

Systems that can translate perceptual or imagined experiences into language could open new modes of communication for people with severe neurological conditions, such as paralysis, aphasia, or degenerative diseases affecting speech. At the same time, the prospect of deducing internal mental content from neural data raises complex ethical issues.

In the future, when it becomes easier to interpret brain activity, researchers and policymakers will need to consider how privacy, consent, and cognitive autonomy can be protected in an environment in which thoughts can, under certain conditions, be decoded. 

Increasingly sophisticated systems that can interpret neural signals and restore aspects of human thought are presenting researchers and ethicists with broader questions about how artificial intelligence may change the nature of human knowledge. 

According to scholars, if algorithmic systems are increasingly used as default intermediaries for information, understanding could gradually shift from direct human reasoning to automated interpretation.

In this scenario, the traditional qualities of human judgement - context awareness, critical doubt, ethical reflection, and interpretive nuance - may be eclipsed by the efficiency and speed of machine-generated responses. Some analysts worry that this shift could create a new form of epistemic divide.

On one side would be individuals who continue to cultivate the cognitive discipline needed to build knowledge through sustained attention, reflection, and analysis; on the other, those whose thinking is increasingly mediated by digital systems that provide answers on demand.

The latter approach can improve productivity and speed up problem solving in many contexts, but overreliance on external computational tools may, over time, weaken the underlying habits of independent inquiry.

The implications would likely extend far beyond academic environments, shaping who remains capable of managing complex decisions, evaluating conflicting information, or generating truly original ideas rather than relying on pattern predictions generated by algorithms.

Despite these concerns, experts emphasize that the most appropriate response to artificial intelligence is not rejection, but carefully designed social and systemic practices that maintain human cognitive agency. Educators, institutions, and policymakers will likely need to deliberately reintroduce the intellectual friction that sustains deep thinking, even as automated information retrieval and analytical tools strip that friction away.

Learning environments could encourage individuals to exercise independent problem-solving skills before consulting digital tools, and could evaluate performance using methods that emphasize reasoning, revision, and reflection. The distinction between possessing knowledge and merely retrieving information may be particularly relevant in this context.

Retrieval systems can deliver information instantly, but true understanding requires the ability to explain concepts, apply them to unfamiliar situations, and critically examine the assumptions they rest on. These implications are particularly significant for younger generations, whose cognitive habits are still developing.

Researchers increasingly emphasize the importance of activities that build concentration and independent thought: reading for sustained periods, writing without assistance, solving complex problems, and composing creative works that require patience and focus. In an environment in which information is almost effortless to access, such activities serve as essential forms of cognitive training.

As neural decoding technologies and artificial intelligence-assisted cognition progress, preserving the human capacity for deliberate thought may ultimately prove just as important as achieving technological breakthroughs. Without that balance, the question is not whether intelligence will diminish, but whether individuals will gradually lose control over the process by which their own thoughts are formed.

The future trajectory of neural decoding and AI-assisted cognition will be determined both by technological advancement and by the frameworks that guide its application.

As the ability to interpret brain activity becomes more refined, researchers, clinicians, and policymakers will be required to develop clear safeguards that protect mental privacy while ensuring the technology serves a legitimate scientific or medical purpose. 

Comprehensive governance, transparent research standards, and ethical oversight will play a central role in determining how such tools are integrated into society. If neural interfaces and artificial intelligence-driven interpretation systems are developed responsibly, they can transform communication for patients with severe neurological impairments and provide greater insight into human behavior.

Above all, it remains essential to maintain a clear boundary between assistance and intrusion, so that advancements in decoding the brain ultimately enhance human autonomy rather than compromise it.

Chinese Threat Actors Attack Southeast Asian Military Targets via Malware


A China-based cyber espionage campaign has been targeting military organizations in Southeast Asia. The state-sponsored campaign began in 2020.

Palo Alto Networks Unit 42 has been tracking the campaign under the name CL-STA-1087, where CL denotes a cluster and STA denotes suspected state-backed motivation.

According to security experts Yoav Zemah and Lior Rochberger, “The activity demonstrated strategic operational patience and a focus on highly targeted intelligence collection, rather than bulk data theft. The attackers behind this cluster actively searched for and collected highly specific files concerning military capabilities, organizational structures, and collaborative efforts with Western armed forces.”

About the campaign

The campaign shows hallmarks commonly associated with APT operations, such as defense evasion tactics, tailored delivery methods, custom payload deployment, and stable operational infrastructure that sustains long-term access to compromised systems.

MemFun and AppleChris

The threat actors deployed backdoors called MemFun and AppleChris, along with a credential harvester called Getpass. Researchers discovered the tools after observing malicious PowerShell execution in which a script entered a sleep state and then opened reverse shells to an attacker-controlled C2 server. The exact initial access vector remains unknown.

About the attack sequence

The compromise sequence deploys different versions of AppleChris across victim endpoints and moves laterally to avoid detection. The attackers were also observed searching for material on joint military activities, detailed assessments of operational capabilities, and official meeting records. According to the researchers, the “attackers showed particular interest in files related to military organizational structures and strategy, including command, control, communications, computers, and intelligence (C4I) systems.”

MemFun and AppleChris are designed to access a shared Pastebin account that serves as a dead-drop resolver, retrieving the real C2 address in Base64-encoded form. One AppleChris variant also keeps Dropbox as a backup channel for fetching the C2 details when the Pastebin approach fails. Installed via DLL hijacking, AppleChris contacts the C2 server to receive commands for drive enumeration and related tasks.
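For analysts reproducing this pattern in a lab, the dead-drop resolution step reduces to a fetch-and-decode, as sketched below. The URL is a hypothetical placeholder rather than a live indicator, and the code illustrates the technique generically, not logic recovered from the samples.

```python
import base64
import requests

# Hypothetical dead-drop resolver: fetch a paste, Base64-decode it, and
# treat the result as the real C2 address (a backup channel such as
# Dropbox would follow the same fetch-and-decode pattern).
PASTE_URL = "https://pastebin.com/raw/EXAMPLE"  # placeholder, not an IOC

def resolve_dead_drop(url: str) -> str:
    blob = requests.get(url, timeout=10).text.strip()
    return base64.b64decode(blob).decode("utf-8")  # e.g. "203.0.113.7:443"
```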

According to Unit 42, “To bypass automated security systems, some of the malware variants employ sandbox evasion tactics at runtime. These variants trigger delayed execution through sleep timers of 30 seconds (EXE) and 120 seconds (DLL), effectively outlasting the typical monitoring windows of automated sandboxes.”

Debunking the Myth of “Military‑Grade” Encryption

 

Military-grade encryption sounds impressive, but in reality it is mostly a marketing phrase used by VPN providers to describe widely available, well‑tested encryption standards like AES‑256 rather than some secret military‑only technology. The term usually refers to the Advanced Encryption Standard with a 256‑bit key (AES‑256), a symmetric cipher adopted as a US federal standard in 2001 to replace the older Data Encryption Standard. 

AES turns readable data into random‑looking ciphertext using a shared key, and the 256‑bit key length makes brute‑force attacks computationally infeasible for any realistic adversary. Because the same key is used for both encryption and decryption, AES is paired with slower asymmetric algorithms such as RSA during the VPN handshake so the symmetric key can be exchanged securely over an untrusted network. Once that key is agreed, your traffic flows efficiently using AES while still benefiting from the secure key exchange provided by public‑key cryptography.
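A minimal sketch of that hybrid handshake, using Python's widely deployed `cryptography` package rather than any particular VPN's code: RSA wraps a freshly generated 256-bit session key, and AES-256-GCM then protects the bulk traffic.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server's long-term RSA key pair (the asymmetric half of the handshake).
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Client: generate a random 256-bit AES key and wrap it with RSA.
session_key = AESGCM.generate_key(bit_length=256)
wrapped = server_key.public_key().encrypt(session_key, oaep)

# Bulk traffic is then encrypted symmetrically with AES-256-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"traffic in the tunnel", None)

# Server: unwrap the session key and decrypt.
key = server_key.decrypt(wrapped, oaep)
print(AESGCM(key).decrypt(nonce, ciphertext, None))
```

Real VPN protocols negotiate keys with ephemeral Diffie-Hellman rather than raw RSA key wrapping, but the division of labor, an asymmetric handshake plus a symmetric bulk cipher, is the same.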

Calling this setup “military‑grade” is misleading because it implies special, restricted technology, when in fact AES‑256 is an open, publicly documented standard used by governments, banks, corporations, and everyday internet services alike. Any competent developer can implement AES‑256, and your browser and many apps already rely on it to protect logins and other sensitive data as it traverses the internet. In practical terms, the same class of algorithm that safeguards classified government communications also secures routine tasks like online banking or cloud storage. VPN marketing leans on the phrase because “AES‑256 with a 256‑bit key” means little to non‑experts, while “military‑grade” instantly conveys strength and trustworthiness.

Strong encryption is not overkill reserved for spies; it matters for everyday users whose online activity constantly generates data trails across sites and apps. That information is monetized for targeted advertising and exposed in breaches that can enable phishing, identity theft, or other fraud, even if you believe you have nothing to hide. Location histories, financial records, and health details are all highly sensitive, and the risks are even greater for journalists, activists, or people living under repressive regimes where surveillance and censorship are common. For them, robust encryption is essential, often combined with obfuscation and multi‑hop VPN chains to conceal VPN usage and add layers of protection if an exit server is compromised.

Ultimately, a VPN without strong encryption offers little real security, whether you are using public Wi‑Fi or simply trying to keep your ISP and advertisers from building detailed profiles about you. AES‑256 remains a widely trusted choice, but modern VPNs may also use alternatives like ChaCha20 in protocols such as WireGuard, which, although not a NIST standard, has been thoroughly audited and is considered secure. The important point is not the “military‑grade” label but whether the service implements proven, well‑reviewed cryptography correctly and combines it with privacy‑preserving features that match your threat model.
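For comparison, the same `cryptography` package exposes ChaCha20-Poly1305, the authenticated cipher WireGuard uses; in a sketch it plays exactly the role AES-GCM plays above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()   # 256-bit key, like AES-256
nonce = os.urandom(12)
aead = ChaCha20Poly1305(key)

ct = aead.encrypt(nonce, b"same job, different cipher", None)
assert aead.decrypt(nonce, ct, None) == b"same job, different cipher"
```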

Shadow AI Risks Rise as Employees Use Generative AI Tools at Work Without Oversight

 

Artificial intelligence, once confined to research labs, now appears routinely inside office software, and the speed of that shift has surprised even experts. Because uptake is growing faster than oversight, the question for companies is less who uses AI than how safely it runs.

Research cited by security specialists suggests that roughly 83 percent of UK workers frequently use generative artificial intelligence for everyday duties - finding data, condensing reports, creating written material. Tools such as ChatGPT simplify repetitive work, and the efficiency gains are most visible in fast-paced departments where speed matters most.

Still, the quick uptake of artificial intelligence brings fresh security risks. More staff now bring personal AI software into the workplace without official organizational approval, a shift experts label "shadow AI": unapproved systems running inside business environments.

These tools handle internal information out of sight of IT teams, and oversight gaps widen when they operate outside monitored channels. Almost three out of four people using artificial intelligence at work have introduced outside tools without approval.

Meanwhile, close to half rely on personal accounts instead of official platforms when working with generative models, and security teams often remain unaware - a gap that leaves sensitive information exposed. What stands out most is the nature of the details staff share: because generative models depend on what users feed them, workers frequently paste written content, programming scripts, or files straight into the interface.

Such inputs often include sensitive company records, proprietary knowledge, personal client data, and sometimes segments of private software code. According to the research, around 93 percent of workers have fed work details into unofficial AI systems, and roughly a third admit that confidential client material was among those inputs.
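One mitigation short of a ban is a pre-submission filter of the kind data-loss-prevention tooling applies: scan text for obviously sensitive patterns before it leaves the company. The sketch below is hypothetical and its patterns are illustrative, not exhaustive.

```python
import re

# Illustrative patterns only; production DLP rules are far broader.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of all sensitive patterns found in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

print(flag_sensitive("summarise this: api_key=sk-123, contact bob@corp.com"))
# -> ['email', 'api_key']
```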

Once such data lands on external servers, companies often lose control over how it is stored, handled, or reused. One real incident showed just how fast things can go wrong: in 2023, Samsung employees pasted private code and confidential meeting details into ChatGPT, exposing data meant to stay inside the company.

Nothing was hacked; the data was simply handed over during routine work. Without strong rules in place, such tools become quiet exits for secrets, and trusting outside software too quickly opens gaps even careful firms miss. Security specialists also stress that compromised AI accounts may not only leak data but also expose wider company networks through stored chat logs.

The regulatory stakes are high: financial firms worry about breaking GDPR rules, while hospitals fear HIPAA violations when staff misuse artificial intelligence tools. One slip with these systems can trigger audits far beyond the IT department's control. And even when companies try to ban AI outright, staff tend to bypass the restrictions.

Experts argue that complete blocks usually fail because staff seek workarounds whenever they believe a tool helps them get things done faster. Organizations may be better served by AI oversight methods that reveal how these tools are actually applied across teams.

By monitoring how systems are accessed and spotting unapproved software, organizations can build a clearer picture of acceptable use. Clear rules tend to be more effective than blanket bans at controlling risk, especially when workers would otherwise keep using new tools quietly. Guidance of this kind strikes a balance: safety improves without blocking progress.

Rust-Based VENON Malware Targets 33 Brazilian Banks

 


A newly identified banking malware strain called VENON is targeting users in Brazil and stands out for an unusual technical choice. Instead of relying on the Delphi programming language used by many long-running Latin American banking trojans, the new threat is written in Rust, a modern systems language that is increasingly appearing in sophisticated cyber operations.

The malware infects Windows machines and was first detected in February 2026. Researchers at the Brazilian cybersecurity firm ZenoX assigned the malware the name VENON after analyzing the threat.

Although it is written in a different programming language, the malware behaves similarly to several well-known banking trojans that have historically targeted financial institutions in Latin America. Analysts say the threat shares operational patterns with malware families such as Grandoreiro, Mekotio, and Coyote. These similarities include techniques like monitoring the active window on a victim’s computer, launching fake login overlays when banking applications open, and hijacking Windows shortcut files to redirect users.

At the moment, investigators have not linked VENON to any previously identified cybercriminal operation. However, forensic examination of an earlier version of the malware dating back to January 2026 revealed traces from the developer’s workstation. File paths embedded in the code repeatedly referenced a Windows user account named “byst4,” which may indicate the environment used during development.

Researchers believe the developer appears to be familiar with how Latin American banking trojans typically operate. However, the implementation in Rust suggests a higher level of technical expertise compared with many traditional banking malware campaigns. Analysts also noted that generative artificial intelligence tools may have been used to help reproduce and expand existing malware capabilities while rewriting them in Rust.

The infection process relies on a multi-stage delivery chain designed to avoid detection. VENON is executed through a technique known as DLL side-loading, where a malicious dynamic-link library runs when a legitimate application loads it. Investigators suspect the campaign may rely on social-engineering tactics similar to the ClickFix method. In this scenario, victims are persuaded to download a ZIP archive that contains the malicious components. A PowerShell script within the archive then launches the malware.

Before performing any harmful actions, the malicious DLL runs several checks designed to evade security tools. Researchers documented nine separate evasion methods. These include detecting whether the malware is running inside a security sandbox, using indirect system calls to avoid monitoring, and bypassing both Event Tracing for Windows (ETW) logging and the Antimalware Scan Interface (AMSI).

After completing these checks, the malware contacts a configuration file hosted on Google Cloud Storage. It then installs a scheduled task on the compromised machine to maintain persistence and establishes a WebSocket connection with a command-and-control server operated by the attackers.

Investigators also identified two Visual Basic Script components embedded in the DLL. These scripts implement a shortcut hijacking mechanism aimed specifically at the Itaú banking application. The technique replaces legitimate shortcuts with manipulated versions that redirect victims to a fraudulent webpage controlled by the threat actor.

The malware even includes an uninstall routine that can reverse these shortcut changes. This feature allows operators to restore the original system configuration, which could help remove evidence of the compromise after an attack.

VENON is configured to monitor activity related to 33 financial institutions and cryptocurrency services. The malware constantly checks the titles of open windows and the domains visited in web browsers. It activates only when a user accesses one of the targeted banking platforms. When triggered, the malware displays fake login overlays designed to capture credentials.
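The trigger mechanism described here boils down to polling the foreground window title, the same loop endpoint monitors use to spot overlay behavior. A Windows-only sketch follows; the watched strings are illustrative stand-ins, not VENON's actual target list.

```python
import ctypes
import time

user32 = ctypes.windll.user32               # Windows-only API
WATCHED = ("internet banking", "login")     # illustrative, not real IOCs

def foreground_title() -> str:
    hwnd = user32.GetForegroundWindow()
    buf = ctypes.create_unicode_buffer(512)
    user32.GetWindowTextW(hwnd, buf, 512)
    return buf.value

while True:  # poll once per second; stop with Ctrl+C
    title = foreground_title().lower()
    if any(w in title for w in WATCHED):
        print("banking-related window in focus:", title)
    time.sleep(1)
```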

The discovery comes amid a broader wave of campaigns targeting Brazilian users through messaging platforms. Researchers recently observed threat actors exploiting the widespread popularity of WhatsApp in the country to spread a worm known as SORVEPOTEL. The worm spreads through the desktop web version of the messaging service by abusing already authenticated chat sessions to send malicious messages directly to contacts.

According to analysts at Blackpoint Cyber, a single malicious message sent from a compromised SORVEPOTEL session can initiate a multi-stage infection chain. In one observed scenario, the attack eventually deployed the Astaroth threat entirely in system memory.

The researchers noted that the combination of local automation tools, browser drivers operating without supervision, and runtime environments that allow users to write files locally made it easier for both the worm and the final malware payload to install themselves with minimal resistance.

Windows Telemetry Explained: What Diagnostic Data Microsoft Collects and Why It Matters

 

Years after Windows 10 arrived, one aspect of it keeps stirring conversation: telemetry. This data gathering, which Microsoft calls diagnostic data, pulls details from machines without manual input, with the stated purpose of keeping systems stable, secure, and running smoothly. Yet reactions split sharply between everyday users and those watching privacy trends.

Early on, observers questioned whether Windows 10's telemetry might double as monitoring, and a few writers argued that it collected large amounts of user detail and transmitted the data to Microsoft's servers. Analysts who have inspected how the operating system handles information, however, report little proof backing such suspicions.

Beginning in 2017, scrutiny from the Dutch Data Protection Authority revealed shortcomings in how Windows presented telemetry consent choices: although designed to gather system performance details, the setup failed to align with regional privacy expectations because user permissions were unclear.

Instead of defending the original design, Microsoft adjusted both the interface wording and the backend configurations. Following these updates, oversight bodies acknowledged the improvements, noting that no evidence emerged suggesting private information had been gathered unlawfully. Independent analysts and regulatory teams had previously flagged the configuration, but after the revisions, compliance concerns gradually faded.

Windows telemetry is split into two main tiers: required (basic) diagnostic data and optional reporting. Most personal computers, especially those outside corporate control, enable the basic tier automatically, and there is no standard settings option to switch it off entirely. This baseline layer gathers only what Microsoft says is vital for stability and core operations.

Though largely outside user control, this tier supports ongoing performance checks across devices and basic troubleshooting tied to functions like Windows Update. The information may cover simple fault summaries, hardware configuration details, software and driver inventories, and records tracking whether updates succeed or fail.

According to Microsoft, the insights drawn from this data support stability fixes, security patches, application compatibility, and smoother-running systems. Optional diagnostic data goes beyond the basics, capturing patterns in app use or web habits along with deeper system errors, performance indicators, and hardware traits.

While such data helps refine functionality, it remains under user control via Windows settings, and those cautious about their personal information often choose to turn it off. Notably, memory dumps taken during system failures can form part of the optional diagnostic data.
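Administrators who want to check the effective tier can read the documented policy value, as in the Windows-only sketch below; the key path follows Microsoft's group-policy documentation, though the exact tier labels vary by Windows version (roughly: 0 Security, 1 Basic/Required, 2 Enhanced, 3 Full/Optional).

```python
import winreg

KEY = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        level, _ = winreg.QueryValueEx(k, "AllowTelemetry")
        print("policy telemetry level:", level)
except FileNotFoundError:
    # No policy set: the choice made in the Settings app applies instead.
    print("no telemetry policy configured")
```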

When a crash happens, pieces of files that were open at the time might be saved inside these records, which is why some organizations managing confidential material prefer disabling the setting altogether. In 2018, Microsoft rolled out a feature named Diagnostic Data Viewer to improve transparency; the tool lets users review exactly what information their machine shares with the company, including the specifics found in diagnostics and system summaries.

With roughly a billion devices running Windows 11 worldwide and countless variations in hardware and software setups, Microsoft relies on telemetry to reveal issues, shape update improvements, and support consistent performance. While tracking user interactions might sound intrusive, the data chiefly guides fixes without exposing personal details: patterns emerge that steer engineering decisions behind the scenes.

Even though some diagnostic details are essential for basic operation, users worried about personal data can limit what gets sent by turning off non-essential diagnostics in the settings. Certain reporting, however, must stay active for full functionality.
