
Uffizi Cyber Incident Serves as a Warning for Europe’s Cultural Sector

The cyber intrusion at the Uffizi Galleries in early 2026 has quickly evolved from an isolated security lapse into a case study of systemic digital exposure within Europe’s cultural infrastructure. The institution, one of the continent’s most prestigious custodians of artistic heritage, disclosed that attackers succeeded in extracting its photographic archive, an asset of both scholarly and operational value, before containment measures were enacted.

Although restoration from secured backups ensured continuity of operations, the incident has sharpened attention on how legacy systems, often peripheral to core modernization efforts, can quietly become high-risk vectors within otherwise well-defended environments. Subsequent forensic assessments indicate that the breach was neither abrupt nor opportunistic.

Investigative timelines trace initial compromise activity as far back as August 2025, suggesting a calculated persistence campaign rather than a single-point intrusion. The suspected entry vector was an overlooked software component responsible for handling low-resolution image flows on the museum’s public-facing infrastructure, an element deemed non-critical and therefore excluded from rigorous patch cycles. This miscalculation enabled attackers to establish a stable foothold, from which they executed disciplined lateral movement across interconnected systems spanning the Uffizi complex, including Palazzo Pitti and the Boboli Gardens.

Operating under a low-and-slow exfiltration model, the actors deliberately avoided triggering conventional detection thresholds, transferring data incrementally over several months. By the time administrative servers exhibited disruption, the extraction phase had largely concluded, underscoring a level of operational maturity that challenges traditional assumptions about breach visibility and response timelines.
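The low-and-slow pattern described above defeats detectors that alert only on short-window volume. A minimal sketch of the idea, with hypothetical thresholds and record layout, showing how a long-window cumulative baseline can flag a host that never trips a daily alert:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical thresholds: a typical daily alert level, and a cumulative
# cap over a long rolling window.
DAILY_THRESHOLD = 500 * 1024**2        # 500 MB/day
CUMULATIVE_THRESHOLD = 10 * 1024**3    # 10 GB over the window

def flag_hosts(records):
    """Return hosts that stay under the daily threshold yet exceed
    the cumulative threshold; records are (host, day, bytes_sent)."""
    per_host = defaultdict(int)
    daily_alerts = set()
    for host, day, sent in records:
        if sent > DAILY_THRESHOLD:
            daily_alerts.add(host)     # caught by conventional alerting
        per_host[host] += sent
    return {h for h, total in per_host.items()
            if total > CUMULATIVE_THRESHOLD and h not in daily_alerts}

# A host exfiltrating ~150 MB/day for 90 days (~13 GB total) never trips
# the daily threshold but is flagged by the cumulative check.
records = [("img-server-01", date(2025, 8, 1) + timedelta(d), 150 * 1024**2)
           for d in range(90)]
print(flag_hosts(records))  # {'img-server-01'}
```

The host and field names are invented for illustration; real detections would also need per-host baselines rather than one global threshold.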

Beyond its digital architecture, the Uffizi Galleries safeguards some of Italy’s most iconic works, including The Birth of Venus and Primavera by Sandro Botticelli, alongside the Doni Tondo by Michelangelo, a cultural weight that amplifies the implications of any security compromise.

Institutional statements have sought to contextualize the operational impact, indicating that service disruption was limited to the restoration window required for backup recovery, with public disclosure issued post-incident in line with internal verification protocols. 

Reports circulating in Italian media suggested that threat actors had extended their reach across interconnected sites, including Palazzo Pitti and the Boboli Gardens, briefly asserting control over the photographic server and issuing a ransom demand directly to director Simone Verde. 

However, the institution maintains that comprehensive backups remained intact and that parallel developments, such as restricted access to sections of Palazzo Pitti and the temporary relocation of select valuables to the Bank of Italy, were pre-scheduled measures linked to ongoing renovation cycles rather than reactive security responses.

Similarly, the transition from analogue to digital surveillance infrastructure, initially recommended by law enforcement in 2024, was accelerated within a broader risk recalibration framework influenced in part by high-profile incidents such as the Louvre Museum theft case. 

The convergence of these events, including the recent theft of works by Pierre-Auguste Renoir, Paul Cézanne, and Henri Matisse from a northern Italian museum, reinforces a broader pattern in which physical and cyber threats are increasingly intersecting, demanding integrated security postures across Europe’s cultural institutions.

The reference to the Louvre Museum is neither incidental nor rhetorical. On 19 October 2025, a highly coordinated physical breach exposed critical lapses in on-site security when individuals, posing as construction workers, accessed restricted areas via a freight lift, breached a second-floor entry point, and removed multiple pieces of the French Crown Jewels within minutes.

Subsequent findings from a Senate-level inquiry pointed to systemic deficiencies, including limited CCTV coverage across exhibition spaces, misaligned external surveillance equipment, and fundamentally weak access controls at the credential level. The incident, which ultimately led to the resignation of director Laurence des Cars in February 2026, remains unresolved, with the stolen artefacts yet to be recovered. 

Against this backdrop, the distinction drawn by the Uffizi Galleries becomes materially significant. Unlike the Louvre breach, the Uffizi incident remained confined to the digital domain, with no evidence of physical intrusion or compromise of exhibition assets. 

Public-facing operations, including ticketing systems and visitor access, continued uninterrupted, with the only measurable impact attributed to backend restoration processes following data recovery. Amid intensifying scrutiny, conflicting narratives have emerged regarding the scope of data exposure. 

Reporting referenced by Cybernews, citing local sources including Corriere della Sera, alleged that attackers exfiltrated operationally sensitive artefacts, ranging from authentication credentials and alarm configurations to internal layouts and surveillance telemetry, before issuing a ransom demand.

The Uffizi Galleries has firmly contested these assertions, maintaining that forensic validation has yielded no evidence supporting the compromise of architectural maps or restricted security schematics, and emphasizing that certain observational elements, such as camera placement, remain inherently visible within public-facing environments. 

From a technical standpoint, the institution reiterated that core security systems are logically segregated and not externally addressable, limiting the feasibility of direct remote extraction as described. While investigations indicate that threat actors may have leveraged interconnected endpoints, including workstation nodes and peripheral devices, to incrementally profile the environment, officials stress that no physical assets were impacted and no confirmed data misuse has been established.

The ransom communication, reportedly directed to director Simone Verde with threats of dark web exposure, further underscores the psychological dimension often accompanying such campaigns. Notably, precautionary measures observed in parallel, such as temporary gallery closures and the transfer of select holdings to the Bank of Italy, have been attributed to pre-existing operational planning rather than reactive containment.

In the broader context of heightened sectoral vigilance following incidents like the breach-linked vulnerabilities exposed at the Louvre Museum, the Uffizi has accelerated its transition from analogue to digital surveillance infrastructure, aligning with law enforcement recommendations issued in 2024. 

In its final clarification, the Uffizi Galleries moved to separate speculation from confirmed facts. While it did not deny that some valuables had been temporarily moved to a secure vault at the Bank of Italy, officials stressed that this step was part of planned renovation work, not a response to the cyber incident.

Reports from Corriere della Sera about sealed doors and restricted staff communication were also addressed, with the museum explaining that certain closures were linked to long-pending fire safety compliance and structural adjustments required for a historic building of its age. 

On the technical front, the Uffizi confirmed that its photographic archive remained safe, clarifying that although the server had been taken offline, this was done to restore data from backups, a process now completed without any loss.

Despite the attention surrounding the breach, the museum continues to function normally, with visitor areas and ticketing operations unaffected, underlining how effective backup systems and planning helped limit real-world impact.

Over 1 Billion Users Potentially Impacted by Microsoft Zero Day Exposure

Informally known as BlueHammer, a newly discovered Windows zero-day vulnerability has drawn the attention of the cybersecurity community because of its ability to quietly hand control to attackers. While privilege escalation flaws are not uncommon, this particular vulnerability is noteworthy for how efficiently it bridges the gap between restricted access and total system control.

A malicious adversary who has already gained access to a device may leverage this flaw to elevate privileges to NT AUTHORITY\SYSTEM, effectively bypassing the core safeguards designed to keep damage at bay. The situation was further aggravated when fully functional exploit code was disclosed by a security researcher on April 3, before any official remediation or defensive guidance had been made available.

The lack of a CVE, no patch, and the minimal acknowledgement from Microsoft so far indicate that BlueHammer has created a volatile window of exposure which leaves defenders without clear direction. On the other hand, threat actors face considerably lowered barriers to exploitation. 

Further analysis found that BlueHammer operates as a sophisticated local privilege escalation chain that abuses trusted system components within the Windows Defender signature update process, rather than exploiting traditional memory safety flaws. It orchestrates a coordinated interaction between the Volume Shadow Copy Service, the Cloud Files API, and opportunistic locking mechanisms to trigger a race condition between the time of check and the time of use.
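The underlying bug class is easier to see in isolation. The following is a generic time-of-check/time-of-use (TOCTOU) illustration, not the BlueHammer code itself: a privileged routine validates a resource, then acts on it later, and an attacker changes the resource in the gap.

```python
# Generic TOCTOU race illustration (paths are invented for the demo).
import threading
import time

resource = {"path": "updates/signature.bin"}

def privileged_worker():
    # Time of check: the component confirms the path is the expected one.
    assert resource["path"].startswith("updates/")
    time.sleep(0.05)                      # window between check and use
    # Time of use: by now the path may have been swapped.
    return resource["path"]

def attacker():
    time.sleep(0.01)                      # wait until the check has passed
    resource["path"] = "C:/Windows/System32/config/SAM"  # swap the target

t = threading.Thread(target=attacker)
t.start()
used_path = privileged_worker()
t.join()
print(used_path)  # the privileged code now operates on the swapped path
```

Real exploits like the one described win this race against file-system state rather than a shared dictionary, but the check-gap-use structure is the same.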

Using file state transition manipulations during signature updates, the exploit can access protected resources without requiring kernel-level vulnerabilities or elevated privileges. After execution, the exploit extracts the Security Account Manager database from a Volume Shadow Copy snapshot, revealing the NTLM password hashes of local accounts.

By utilizing these credentials, the attacker can assume administrative control and launch a shell in the SYSTEM context. Notably, the exploit incorporates a cleanup routine that reverts the original password hash after execution, minimizing the likelihood of immediate detection and complicating forensic analysis. Independent validations have confirmed the threat's credibility: according to Will Dormann, Tharros' principal vulnerability analyst, the exploit chain, despite minor reliability issues in the initial proof-of-concept, is functionally sound once corrected.

Other researchers have since achieved successful end-to-end compromises in follow-up tests, showing that operational barriers are falling quickly. The risk profile is heightened by the absence of a patch, which leaves organizations without a direct method of remediation, and by the public release of exploit code, which historically accelerates adoption by ransomware and advanced persistent threat operators.

Beyond standard user-level access, the attack requires only slightly outdated Defender signatures, lowering the entry threshold. Further, the exploit is constructed from a series of independent primitives that can be reused after targeted fixes are introduced, indicating a longer-term impact beyond a single vulnerability cycle. The circumstances surrounding the disclosure have also attracted public attention.

The exploit was released publicly by a researcher operating under the alias Chaotic Eclipse, who expressed dissatisfaction with Microsoft's handling of the problem. The accompanying statements conveyed both frustration and intent: the researcher declined to provide detailed technical explanations but implied that experienced practitioners would grasp the underlying mechanics quickly.

Although the original codebase contained bugs affecting stability, these limitations have already been addressed within the research community. What began as a partially functional demonstration has quickly evolved into a reproducible attack path, reinforcing concerns that BlueHammer may move from proof-of-concept to active exploitation in real environments.

According to emerging details surrounding the disclosure, Microsoft had already been informed of the BlueHammer vulnerability; however, unresolved concerns in the handling process appear to have led the researcher to release the exploit publicly without a formal CVE being assigned. Although the published proof-of-concept initially encountered minor implementation problems, it has since proven viable for practical use.

During independent validation by Will Dormann, the exploit was confirmed to be reliable across a variety of environments, including Windows Server deployments, where it achieved administrative control even when full SYSTEM privileges were not consistently acquired.

With technical refinements from Cyderes' Howler Cell team, the exploit chain executed completely once the PoC inconsistencies were addressed, underscoring how rapidly the operational barriers around the exploit are falling. The exploit manipulates Microsoft Defender into generating a Volume Shadow Copy, then strategically interrupts that process at a specific execution point so that sensitive registry data can be accessed before cleanup routines are activated.

Through this controlled interruption, NTLM password hashes associated with local accounts can be extracted and decrypted, followed by unauthorized alteration of administrative credentials. Using token duplication techniques, the attacker inherits administrative security tokens, elevates them to SYSTEM integrity levels, and uses the Windows service creation mechanism to launch a secondary payload.

The result is an active session running a command shell under the NT AUTHORITY\SYSTEM account. To obscure evidence, the exploit then restores the original password hash, ensuring that user credentials remain unchanged while erasing immediate indicators of compromise.

According to security practitioners, BlueHammer represents a broader class of exploitation in which unintended combinations of legitimate system features and discrete software defects are chained into a working exploit.

Cyderes leadership has noted that the technique weaponizes Windows functionality in a manner that evades conventional detection logic, and current Defender signatures appear to identify only the binary originally published. These detections can be bypassed simply by modifying the codebase while retaining the underlying methodology.

In the absence of vendor-provided patches, defensive efforts have shifted toward behavioral monitoring: abnormal interactions with Volume Shadow Copy mechanisms, irregular Cloud Files API activity, and unexpected creation of Windows services originating from low-privileged contexts.
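Those behavioral checks can be prototyped over exported event records. The sketch below uses simplified, invented field names; the event IDs are the standard Windows ones (7045 for service installation, 4688 for process creation), but the matching rules are illustrative only.

```python
# Toy behavioral filter over exported Windows event records.
SYSTEM_ACCOUNTS = {"SYSTEM", "LOCAL SERVICE", "NETWORK SERVICE"}

def suspicious(events):
    alerts = []
    for e in events:
        # A new Windows service registered by a low-privileged account.
        if (e["id"] == 7045 and e["account"] not in SYSTEM_ACCOUNTS
                and not e.get("is_admin", False)):
            alerts.append(("service-from-low-priv", e["account"], e["detail"]))
        # Shadow-copy tooling launched from an unexpected parent process.
        if (e["id"] == 4688 and "vssadmin" in e["detail"].lower()
                and e.get("parent") != "services.exe"):
            alerts.append(("unexpected-vss-activity", e["account"], e["detail"]))
    return alerts

events = [
    {"id": 7045, "account": "SYSTEM", "detail": "WinDefend definition update"},
    {"id": 7045, "account": "bob", "is_admin": False, "detail": "svc_upd2"},
    {"id": 4688, "account": "bob", "parent": "cmd.exe",
     "detail": "vssadmin create shadow /for=C:"},
]
for alert in suspicious(events):
    print(alert)
```

In production this logic would sit in a SIEM query rather than a script, but the shape of the rules is the same.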

Additional indicators of potential exploitation include transient changes to local administrator passwords followed by rapid restoration. There are no confirmed reports of active in-the-wild abuse at this point; however, the public availability of the exploit dramatically shortens the timeline for potential weaponization.
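The transient password change indicator lends itself to a simple pairing rule: two password reset events (Security log event 4724) for the same account within a short interval. The 15-minute window and record layout below are illustrative assumptions.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)  # illustrative "rapid restoration" window

def transient_resets(events):
    """events: time-sorted (timestamp, event_id, target_account) tuples.
    Flags accounts reset twice within the window (reset, then restore)."""
    last_reset = {}
    flagged = set()
    for ts, eid, account in events:
        if eid != 4724:               # 4724 = password reset attempt
            continue
        prev = last_reset.get(account)
        if prev is not None and ts - prev <= WINDOW:
            flagged.add(account)
        last_reset[account] = ts
    return flagged

events = [
    (datetime(2026, 4, 5, 3, 0), 4724, "Administrator"),  # attacker resets
    (datetime(2026, 4, 5, 3, 6), 4724, "Administrator"),  # cleanup restores
    (datetime(2026, 4, 5, 9, 0), 4724, "helpdesk-test"),  # isolated reset
]
print(transient_resets(events))  # {'Administrator'}
```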

In the past, ransomware groups and advanced threat actors have demonstrated the capability to operationalize these disclosures within days, often integrating them into more comprehensive intrusion frameworks. 

While the initial requirement for local access is a constraint, it does not pose a significant barrier to determined adversaries, who routinely gain footholds through credential theft, phishing campaigns, or lateral movement within compromised networks. BlueHammer should therefore be treated as an open exposure window rather than an isolated vulnerability, highlighting the risks inherent in complex system interactions and the challenge of defending against exploitation paths that do not rely on a single, easily remediable flaw.

In the absence of immediate remediation, containment and exposure reduction are the necessary responses to BlueHammer. Security teams should prioritize environments where untrusted or potentially compromised code is already running, since exploits of this nature are most effective once an attacker has established a foothold. In the short term, enforcing least privilege, eliminating unnecessary local administrative rights, and closely inspecting anomalous privilege escalation patterns can significantly reduce the available attack surface.

Detecting subtle indicators of post-compromise activity is also critical, including irregular access to sensitive account data, unexpected privilege transitions, and processes that deviate from established baselines. Managing risk from a broader perspective requires a clear understanding of emerging vulnerabilities and exposed assets.

Context-driven approaches that correlate newly disclosed vulnerabilities with organizational infrastructure allow remediation efforts to be prioritized where they have the greatest impact, rather than applying uniform responses across all systems. This is particularly necessary where no immediate vendor guidance is available, leaving defenders to rely on situational awareness and adaptive monitoring.

Finally, BlueHammer illustrates how a vulnerability can shift quickly from controlled disclosure to operational risk when exploit code reaches the public domain before a fix exists. These conditions compress response timelines and disadvantage defenders, even in the absence of confirmed widespread exploitation.

It also underscores a persistent reality of Windows security: attackers often do not need sophisticated remote exploits to achieve meaningful compromise. A limited foothold combined with a reliable escalation path is sufficient to take full control of a system.

When that pathway becomes public without mitigations, the risk profile increases dramatically, and affected organisations must maintain a disciplined defensive posture and sustained attention. BlueHammer thus emphasizes the importance of resilience in the face of incomplete information and delayed remediation.

Organizations that prioritize proactive threat hunting, adhere to strict access controls, and continuously verify system behavior against expected norms are better prepared to mitigate such threats. Limiting the impact of evolving exploitation techniques requires a multilayered defensive strategy incorporating visibility, control, and rapid response, rather than reliance on vendor-driven fixes alone.

From Vulnerability Management to Preemptive Exposure Management


The traditional model of vulnerability management—“scan, wait, patch”—was built for an earlier era, but today’s attackers operate at machine speed, exploiting weaknesses within hours of disclosure through automation and AI-driven reconnaissance. The challenge is no longer about identifying vulnerabilities but fixing them quickly enough to stay ahead. While organizations discover thousands of exposures every month, only a fraction are remediated before adversaries take advantage.

Roi Cohen, co-founder and CEO of Vicarius, describes the answer as “preemptive exposure management,” a strategy that anticipates and neutralizes threats before they can be weaponized. “Preemptive exposure management shifts the model entirely,” he explains. “It means anticipating and neutralizing threats before they’re weaponized, not waiting for a CVE to be exploited before taking action.” This proactive model requires continuous visibility of assets, contextual scoring to highlight the most critical risks, and automation that compresses remediation timelines from weeks to minutes.

Michelle Abraham, research director for security and trust at IDC, notes the urgency of this shift. “Proactive security seems to have taken a back seat to reactive security at many organizations. IDC research highlights that few organizations track all their IT assets which is the critical first step towards visibility of the full digital estate. Once assets and exposures are identified, security teams are often overwhelmed by the volume of findings, underscoring the need for risk-based prioritization,” she says. Traditional severity scores such as CVSS do not account for real-world exploitability or the value of affected systems, which means organizations often miss what matters most. Cohen stresses that blending exploit intelligence, asset criticality, and business impact is essential to distinguish noise from genuine risk.
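One way to see what such blending looks like in practice is a toy scoring function. The weights, field names, and formula below are illustrative assumptions, not any vendor's actual model; the point is only that exploit likelihood and asset context reorder findings that raw CVSS would rank the other way.

```python
def context_score(finding):
    """Blend base severity with exploit likelihood and asset context."""
    cvss = finding["cvss"] / 10.0            # normalize to 0..1
    exploit = finding["exploit_prob"]        # e.g. an EPSS-style probability
    asset = finding["asset_criticality"]     # 0..1, from the asset inventory
    internet = 1.0 if finding["internet_facing"] else 0.5
    return round(100 * cvss * (0.5 + 0.5 * exploit) * asset * internet, 1)

findings = [
    # Critical CVSS, but unlikely to be exploited on a low-value host.
    {"name": "CVE-A", "cvss": 9.8, "exploit_prob": 0.02,
     "asset_criticality": 0.3, "internet_facing": False},
    # Lower CVSS, but highly exploitable on a critical internet-facing asset.
    {"name": "CVE-B", "cvss": 7.5, "exploit_prob": 0.9,
     "asset_criticality": 1.0, "internet_facing": True},
]
ranked = sorted(findings, key=context_score, reverse=True)
print([f["name"] for f in ranked])  # ['CVE-B', 'CVE-A']
```

By raw CVSS alone, CVE-A would outrank CVE-B; with context, the order flips.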

Abraham further points out that less than half of organizations currently use exposure prioritization algorithms, and siloed operations between security and IT create costly delays. “By integrating visibility, prioritization and remediation, organizations can streamline processes, reduce patching delays and fortify their defenses against evolving threats,” she explains.

Artificial intelligence adds another layer of complexity. Attackers are already using AI to scale phishing campaigns, evolve malware, and rapidly identify weaknesses, but defenders can also leverage AI to automate detection, intelligently prioritize threats, and generate remediation playbooks in real time. Cohen highlights its importance: “In a threat landscape that moves faster than any analyst can, remediation has to be autonomous, contextual and immediate and that’s what preemptive strategy delivers.”

Not everyone, however, is convinced. Richard Stiennon, chief research analyst at IT-Harvest, takes a more skeptical stance: “Most organizations have mature vulnerability management programs that have identified problems in critical systems that are years old. There is always some reason not to patch or otherwise fix a vulnerability. Sprinkling AI pixie dust on the problem will not make it go away. Even the best AI vulnerability discovery and remediation solution cannot overcome corporate lethargy.” His concerns highlight that culture and organizational behavior remain as critical as the technology itself.

Even with automation, trust issues persist. A single poorly executed patch can disrupt mission-critical operations, leading experts to recommend gradual adoption. Much like onboarding a new team member, automation should begin with low-risk actions, operate with guardrails, and build confidence over time as results prove consistent and reliable. Lawrence Pingree of Dispersive emphasizes prevention: “We have to be more preemptive in all activities, this even means the way that vendors build their backend signatures and systems to deliver prevention. Detection and response is failing us and we're being shot behind the line.”

Regulatory expectations are also evolving. Frameworks such as NIST CSF 2.0 and ISO 27001 increasingly measure how quickly vulnerabilities are remediated, not just whether they are logged. Compliance is becoming less about checklists and more about demonstrating speed and effectiveness with evidence to support it.
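The evidence such frameworks ask for is ultimately simple arithmetic over finding records. A sketch of a mean-time-to-remediate calculation, with illustrative field names:

```python
from collections import defaultdict
from datetime import datetime

def mttr_days(findings):
    """Mean time to remediate per severity band, in days.
    Findings still open (fixed is None) are excluded from the mean."""
    totals, counts = defaultdict(float), defaultdict(int)
    for f in findings:
        if f["fixed"] is None:
            continue
        delta = (f["fixed"] - f["detected"]).total_seconds() / 86400
        totals[f["severity"]] += delta
        counts[f["severity"]] += 1
    return {sev: round(totals[sev] / counts[sev], 1) for sev in counts}

findings = [
    {"severity": "critical", "detected": datetime(2026, 1, 1),
     "fixed": datetime(2026, 1, 4)},
    {"severity": "critical", "detected": datetime(2026, 1, 10),
     "fixed": datetime(2026, 1, 15)},
    {"severity": "low", "detected": datetime(2026, 1, 1), "fixed": None},
]
print(mttr_days(findings))  # {'critical': 4.0}
```

A real program would also report the open-finding backlog, since excluding unfixed items can make MTTR look better than the posture actually is.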

Experts broadly agree on what needs to change: unify detection, prioritization, and remediation workflows; automate obvious fixes while maintaining safeguards; prioritize vulnerabilities based on exploitability, asset value, and business impact; and apply runtime protections to reduce exposure during patching delays. Cohen sums it up directly: security teams don’t need to find more vulnerabilities—they need to shorten the gap between detection and mitigation. With attackers accelerating at machine speed, the only sustainable path forward is a preemptive strategy that blends automation, context, and human judgment.

Here's Why Ransomware Actors Have an Upper Hand Against Organisations


Successful ransomware assaults are increasing, not necessarily because the attacks are more sophisticated in design, but because attackers have found that many of the world's largest companies have failed to build resilience around basic safety measures. Despite huge investments in cybersecurity from both the private and public sectors, many organisations remain vulnerable to ransomware attacks.

Richard Caralli, senior cybersecurity advisor at Axio, has over 40 years of experience as a practitioner, researcher, and leader in the audit and cybersecurity fields. Based on that experience, he sees two primary reasons for the lack of ransomware resilience that exposes numerous organisations to otherwise preventable flaws in their defences:

  • Recent noteworthy intrusions, such as those on gaming companies, consumer goods manufacturers, and healthcare providers, highlight the fact that some organisations may not have implemented basic safety standards. 
  • Organisations that have put in place foundational practices may not have done enough to confirm and validate those practices' performance over time, which causes expensive investments to lose their efficacy more quickly. 

Given this, organisations can take three simple actions to boost fundamental resilience to ransomware:

Recommit to core practices

According to Verizon's "2023 Data Breach Investigations Report," 61% of all incidents involved user credentials. Two-factor authentication (2FA) is now regarded as an essential control for access management, yet a failure to apply this additional layer of security is at the heart of UnitedHealth Group/Change Healthcare's ongoing ransomware nightmare. The intrusion affects not only patients but also service providers and professionals, who face severe barriers to obtaining treatment authorisations and payments. An entire sector is under strain as a result of a major healthcare provider's failure to adopt this foundational control.
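For context on what this foundational control actually computes: most authenticator apps implement HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). A compact standard-library sketch, checked against the RFC 4226 test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret, at=None, step=30):
    """TOTP (RFC 6238): HOTP keyed by the current 30-second time step."""
    t = int((time.time() if at is None else at) // step)
    return hotp(secret, t)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

The control's value lies less in the algorithm than in deployment: codes are only a second factor if the primary credential alone cannot authorize access.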

Ensure fundamental procedures are institutionalised

Many organisations take a "set and forget" approach that handles cybersecurity during the installation stage but fails to ensure that procedures, controls, and countermeasures endure throughout the infrastructure's life, particularly as infrastructures expand and adapt to organisational change.

For example, cybersecurity procedures that are not actively adopted with characteristics that enable institutionalisation and durability are at risk of failing to withstand developing ransomware attack vectors. But what exactly does institutionalisation mean? Higher maturity behaviours include documenting the practice, resourcing it with sufficiently skilled and accountable people, tools, and funding, supporting its enforcement through policy, and measuring its effectiveness over time. 

Implementing the basics 

The challenges of implementing and maintaining essential cybersecurity measures are numerous: doing so requires a commitment to constant attention, active management, and a thorough understanding of emerging hazards. However, by confronting these obstacles and ensuring that cybersecurity procedures are rigorously established, measured, and maintained, organisations can better protect themselves against the ever-present threat of ransomware.

Focussing on the basics first — such as implementing foundational controls like 2FA, developing maintenance skills to integrate IT and security efforts, and adopting performance management practices — can lead to significant improvements in cybersecurity, providing robust protection with less investment.

Xapo Bank Aims To Boost Bitcoin Safety With Tech And Bunkers


Satoshi Nakamoto, the pseudonymous developer of Bitcoin, published the system's whitepaper in 2008, bluntly criticising financial institutions and the trust they demand. However, in 2010, Hal Finney, the cypherpunk and cryptography specialist who was one of Bitcoin's most notable early collaborators and the recipient of the first Bitcoin transaction in history, predicted the existence of bitcoin banks. Today, bitcoin-native banks such as Xapo Bank exist in the grey area between that ethos and the potential deployment of the system across the global financial sector.

Founded in 2013, Xapo Bank is among the leaders in the Bitcoin custody space. Wences Casares, an Argentinean entrepreneur and innovator well-known in Silicon Valley for his support of the technology, developed it as a solution for his friends and family; however, it expanded significantly. Today it is one of the few fully licensed banks in the world that deals with Bitcoin and other digital assets.

Its business model combines cutting-edge Bitcoin technology with a physical bunker in the Swiss highlands, a location that blends old-fashioned Swiss standards with the latest safety technology. The atomic bunker serves as the foundation of what Xapo provides its clients: high-quality security for digital assets. Xapo is also exploring new technical opportunities. The custody business is dominated by multi-signature solutions, but for the Gibraltar-registered bitcoin bank the strongest alternative is the multi-party computation (MPC) protocol. At a high level, MPC enables several parties to compute over shared information without fully exposing the shared data.

In Xapo's case, this works by breaking the digital asset master private key into several unique fragments known as "key shares," which Xapo Bank stores and distributes across hidden locations around the world, including the Swiss bunker. The MPC protocol ensures that participants' contributions remain private during key generation and signing. As a result, no single participant in the quorum has total access to or control over the stored assets, reducing the chance of collusion to nearly zero.
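The share concept can be illustrated with Shamir secret sharing, sketched below. One caveat up front: production MPC custody of the kind described here goes further, computing signatures without ever reassembling the key; this toy only demonstrates the threshold property that no single share, or undersized quorum, reveals the secret.

```python
import random

P = 2**127 - 1  # a Mersenne prime; the demo field for share arithmetic

def split(secret, n, k):
    """Split `secret` into n shares such that any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

master_key = 271828182845904523536            # stand-in for a wallet key
shares = split(master_key, n=5, k=3)          # e.g. bunker + four other sites
print(reconstruct(shares[:3]) == master_key)  # True: any 3 shares suffice
```

Two shares of a 3-of-5 split interpolate the wrong polynomial, so an undersized quorum recovers essentially random data, which is the property the key-share distribution relies on.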

"MPC is a much more modern and secure setup compared to a still more popular multi-signature approach. The fact that the private key is not put together at any point in the transaction means there is no moment it can be potentially exposed or hacked, which is not the case with the more traditional multi-sig technology," Xapo Bank's Chief Technology Officer, Kamil DziubliÅ„ski, stated. 

However, there are threats and concerns, even with a movie-style bunker and this novel method of securing the keys and transaction signing process. Security threats include hacking and phishing attempts. Financial risks include money laundering, terrorist financing, and various types of financial attacks.

Microsoft Update Alert: 70% Of Windows Users Are Now At Risk


Microsoft's end-of-support date for Windows 10 is approaching on October 14, 2025, and the operating system is already facing a serious security threat. With 70% of Windows users still operating Windows 10, the landscape for cyber-attacks has become increasingly perilous. The situation has major consequences for individuals and organisations who rely on Windows 10.

What's happening?

A 2018 Windows flaw has been added to the US government's Known Exploited Vulnerabilities (KEV) catalogue, warning of potential privilege escalation and remote code execution attacks. Researchers believe that the vulnerability, CVE-2018-0824, was exploited by the Chinese hacking outfit APT41. This threat actor is backed by the Ministry of State Security and is treated with a high level of seriousness because it targets both government and private organisations.

The US government has warned affected users to patch the flaw, or stop using vulnerable Windows systems, by August 26; those who fail to do so will remain exposed to attack. The vulnerability does not affect Windows 11, nor patched and up-to-date Windows systems, underscoring the importance of upgrades. Even so, the warnings appear to be insufficient: many users continue to run Windows 10, with only 30% having moved to Windows 11. 

Furthermore, as the end-of-support date approaches, hundreds of scam emails are likely to target Windows 10 customers' inboxes. The hackers would take advantage of this situation and jeopardise the security of users' data and systems, resulting in data breaches and other serious consequences such as system compromise and financial losses. 

Take a look at Reddit or the comments on this post to see the enormous number of Windows users who are waiting for Microsoft to pull a late rabbit out of the hat and extend Windows 10 support. It is unclear how this would affect all those who have already invested in upgrading. 

Given the recent experience of blue screens of death appearing around the globe, come October this could be a hackers' paradise for a while, with malicious actors taking advantage of the situation and sending scam after scam to nervous Windows 10 users.

AI and Vulnerability Management: Industry Leaders Show Positive Signs


Positive trend: AI and vulnerability management

We are in a fast-paced industry, and as technology develops by the day, the risk of cyber attacks grows with it. Defense against such attacks therefore becomes paramount. 

The latest research into the cybersecurity industry by Seemplicity revealed that 91% of participants say their security budget is increasing this year, a sign of the growing importance organizations place on cybersecurity.

Understanding report: An insight into industry leaders' mindset

The report surveyed 300 US cybersecurity experts to gauge their views on pressing topics such as automation, AI, regulatory compliance, and vulnerability and exposure management. Organizations reported employing an average of 38 cybersecurity vendors, highlighting the complexity and fragmentation of modern attack surfaces. 

This fragmentation leaves 51% of respondents experiencing high levels of noise from their tools: they feel overwhelmed by the traffic of notifications, alerts, and findings, many of which never translate into action. 

As a result, 85% of respondents struggle to handle this noise. The most troubling challenge reported was slow or delayed risk reduction, which highlights the seriousness of the problem: the inundating noise slows effective vulnerability identification and therefore delays the response to threats. 

Automation and vulnerability management on the rise

97% of respondents cited at least one method for controlling noise, showing both acceptance of the problem and urgency to resolve it. The same 97% showed some degree of automation, hinting at growing recognition of the benefits of automation in vulnerability and exposure management. The growing trend towards automation tells us one thing: adoption is being received positively. 

However, 44% of respondents still rely on manual methods, a sign that there still exists a gap to full automation.

But the message is loud and clear: automation has improved vulnerability and exposure management efficiency, with 89% of leaders reporting benefits, the top one being a quicker response to emerging threats. 
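As a simple illustration of the kind of automation respondents credit, here is a hypothetical sketch that deduplicates overlapping findings reported by multiple tools and triages them by severity (the field names `asset`, `cve`, and `severity` are invented for the example, not taken from any particular vendor's schema):

```python
def deduplicate_findings(findings: list[dict]) -> list[dict]:
    """Merge raw tool findings by (asset, CVE), keeping the highest severity,
    then sort so the riskiest items reach analysts first."""
    merged: dict[tuple, dict] = {}
    for f in findings:
        key = (f["asset"], f["cve"])
        if key not in merged or f["severity"] > merged[key]["severity"]:
            merged[key] = f
    return sorted(merged.values(), key=lambda f: f["severity"], reverse=True)
```

Even trivial steps like this, applied across dozens of vendor feeds, cut down the volume of duplicate alerts that analysts would otherwise triage by hand.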

AI: A weapon against cyber threats

The prevailing opinion (64%) that AI will be a key force in fighting cyber threats is a positive sign of its potential to build a robust cybersecurity infrastructure. However, there is also a major concern (68%) about the effect of integrating AI into software development on vulnerability and exposure management: AI will increase the pace of code development, and security teams will struggle to keep up. 

Are We Ready For The Next Major Global IT Outage? Here's All You Need to Know

 

Last Friday, a glitch in a tech firm's software update led to a global disruption impacting cross-sector activities. Hospitals, health clinics, and banks were impacted; airlines grounded their planes; broadcasting firms were unable to broadcast (Sky News went off the air); emergency numbers such as 911 in the United States were unavailable; and Magen David Adom (MDA) experienced several disruptions in Israel. 

This incident had a significant impact in the United States, Australia, and Europe. Critical infrastructure and many corporate operations were brought to a halt. In Israel, citizens instantly linked the incident to warfare, namely the UAV that arrived from Yemen and exploded in Tel Aviv, presuming that Iran was attacking in the cyber dimension. 

What exactly happened? 

CrowdStrike, an American firm based in Texas that provides a cybersecurity protection system deployed in several companies across the world, announced on Friday morning that there was a glitch with the most recent version of their system given to customers. The issue caused Microsoft's operating system, Windows, not to load, resulting in a blue screen. As a result, any organisational systems that were installed and based on that operating system failed to load. In other words, the organisation had been paralysed. 

But the trouble didn't end there. During the company's repair actions, hackers "jumped on the bandwagon," impersonating staff members and issuing instructions that essentially led victims to install malicious code and erase their databases. This was the second part of the incident. 

Importance of risk management 

Risk management is an organisational discipline. Within risk management processes, the organisation finds out and maps the threat and vulnerability portfolio in its activities, while also developing effective responses and controls to threats and risks. Threats can be "internal," such as an employee's human error, embezzlement, or a technical failure in a computer or server. Threats can also arise "externally" to the organisation, such as consumer or supplier fraud, a cyberattack, geopolitical threats in general, particularly war, or a pandemic, fire, or earthquake. 

It appears that the world has become far more global and technological than humans like to imagine or believe. And, certainly, a keyboard error made by one individual in one organisation can have global consequences, affecting all of our daily lives. This is the fact, and we should recognise it as soon as possible and start preparing for future incidents through systematic risk management methods.

The Vital Role of Ethical Hacking in Cyber Security

 

The possibility of cyber attacks is a major issue, with the global average cost of a data breach reaching $4.45 million in 2023, a 15% increase over the previous three years, according to an IBM analysis. This stark figure highlights the growing financial and reputational threats companies face, emphasising the importance of ethical hacking in an increasingly interconnected world. 

Ethical hackers are the first line of defence, utilising their knowledge to replicate cyber attacks under controlled conditions. These individuals play an important role in averting potentially disastrous data breaches, financial loss, and reputational harm caused by cyber attacks by proactively fixing security vulnerabilities before they are exploited. 

This article explores the importance of ethical hacking, the tactics used by ethical hackers, and how to pursue a career in this vital sector of cyber security. 

What is ethical hacking? 

Ethical hacking, commonly referred to as penetration testing or white-hat hacking, is a technique for testing computer systems, networks, or online applications for security flaws. Unlike criminal hackers, who attempt to make money from vulnerabilities, ethical hackers utilise their expertise to uncover and patch them before they are exploited. 

They utilise their expertise with authorization, hoping to improve security posture before a real hacker exploits vulnerabilities. This preemptive strike against possible breaches is an important part of modern cyber security tactics and a technique of protecting against the most dangerous cyber security threats. Ethical hacking adheres to a fixed code of ethics and legal restrictions. 

Ethical hackers must have clear permission to explore systems and ensure that their actions do not stray into illegal territory. Respect for privacy, data integrity, and the lawful exploitation of uncovered vulnerabilities is critical. 

Methodologies of Ethical Hacking 

Ethical hackers employ a variety of methodologies to assess the security of information systems. These include: 

Vulnerability assessment: Scanning systems and networks to identify known vulnerabilities. 

Penetration testing: Simulating cyber attacks to evaluate the effectiveness of security measures. 

Social engineering: Testing the human element of security through phishing simulations and other tactics. 

Security auditing: Examining the adherence of systems and policies to security standards and best practices. 

Process of ethical hacking

Step 1: Reconnaissance – The ethical hacker collects as much information about the target system or network as possible, utilising sources such as WHOIS databases, search engines, and social media to obtain publicly available information. 
 
Step 2: Scanning – They look for live hosts, open ports, services running on those hosts, and vulnerabilities connected with them. Nmap may be used to scan ports, while Nessus or OpenVAS can be used to check for vulnerabilities that can be exploited. 

Step 3: Gaining Access – They use the identified vulnerabilities to gain unauthorised access to the system or network. Metasploit is commonly used to exploit vulnerabilities. Other tools include SQL injection tools for database attacks, as well as password cracking programmes such as John the Ripper or Hydra. 

Step 4: Maintaining Access – Ensure continued access to the target for further exploration and analysis without being detected. Tools like backdoors and trojans are used to maintain access while operating stealthily to avoid detection by security systems.

Step 5: Covering Tracks – Delete evidence of the hacking process to avoid detection by system administrators or security software. This involves tampering with logs and using tools to clear or modify entries in system logs; utilities such as CCleaner can also be used to erase footprints.
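As a minimal illustration of the scanning phase in Step 2, a TCP connect probe can be sketched in a few lines of Python. This is a bare-bones sketch, not a substitute for tools like Nmap, and, as stressed above, it must only be run against systems you are explicitly authorised to test.

```python
import socket

def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds (port open);
    False on connection refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A full connect scan is noisy and easily logged; real tools offer stealthier techniques (SYN scans, timing controls), which is why Step 2 names Nmap rather than hand-rolled probes.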

Chinese APT40 Can Exploit Flaws Within Hours of Public Release

 

A joint government advisory claims that APT40, a Chinese state-sponsored actor, is focusing on recently discovered software vulnerabilities in an attempt to exploit them in a matter of hours.

The advisory, authored by the Cybersecurity and Infrastructure Security Agency, FBI, and National Security Agency in the United States, as well as government agencies in Australia, the UK, Canada, New Zealand, Germany, South Korea, and Japan, stated that the cyber group has targeted organisations in a variety of arenas, employing techniques commonly employed by other state-sponsored actors in China. It has often targeted Australian networks, for instance, and remains a threat, the agencies warned. 

Rather than using strategies that involve user engagement, the group seems to prefer exploiting vulnerable, public-facing infrastructure and prioritising the collection of valid credentials. It frequently latches onto public exploits as soon as they become accessible, creating a "patching race" condition for organisations. 

"The focus on public-facing infrastructure is interesting. It shows they're looking for the path of least resistance; why bother with elaborate phishing campaigns when you can just hit exposed vulnerabilities directly?" stated Tal Mandel Bar, product manager at DoControl. 

The APT targets newly disclosed flaws, but it also has access to a large number of older exploits, according to the agencies. As a result, a comprehensive vulnerability management effort is necessary.
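The resulting "patching race" is one reason many teams track the KEV catalogue programmatically. The hypothetical sketch below filters KEV-style records down to products an organisation actually runs and sorts by remediation deadline (the field names `cveID`, `product`, and `dueDate` are modelled loosely on the catalogue's JSON; verify them against the actual feed before relying on this):

```python
from datetime import date

def urgent_kev_entries(kev_entries: list[dict], installed_products: set[str],
                       today: date) -> list[tuple]:
    """Flag catalogue entries that affect products we actually run,
    sorted by how few days remain before the remediation due date."""
    urgent = []
    for e in kev_entries:
        if e["product"].lower() in installed_products:
            days_left = (date.fromisoformat(e["dueDate"]) - today).days
            urgent.append((e["cveID"], e["product"], days_left))
    return sorted(urgent, key=lambda t: t[2])
```

Feeding such a filter from a daily pull of the catalogue gives defenders a head start in the race the advisory describes, for old exploits as well as new ones.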

Comprehensive reconnaissance efforts 

APT40 conducts reconnaissance against networks of interest on a regular basis, "including networks in the authoring agencies' countries, looking for opportunities to compromise its targets," according to the joint advisory. The group then employs Web shells for persistence and focuses on extracting data from sensitive repositories.

"The data stolen by APT40 serves dual purposes: It is used for state espionage and subsequently transferred to Chinese companies," Chris Grove, director of cybersecurity strategy at Nozomi Networks, stated. "Organizations with critical data or operations should take these government warnings seriously and strengthen their defenses accordingly. One capability that assists defenders in hunting down these types of threats is advanced anomaly detection systems, acting as intrusion detection for attackers able to 'live off the land' and avoid deploying malware that would reveal their presence.” 

APT40's methods have also advanced, with the group now routing operations through compromised endpoints such as small-office/home-office (SOHO) devices, making its activity harder for security agencies to distinguish from legitimate traffic. This approach, also noted in Volt Typhoon's operations, is one of many aspects of tradecraft shared with other China-backed activity; the advisory notes that APT40 is also tracked as Kryptonite Panda, Gingham Typhoon, Leviathan, and Bronze Mohawk. 

The advisory provides mitigating approaches for APT40's four major types of tactics, techniques, and procedures (TTPs), which include initial access, execution, persistence, and privilege escalation.

A World of Novel Risks: Stress-Testing Security Assumptions

 

The most severe security failures are generally those that we cannot anticipate – until they occur. Prior to 9/11, national security and law enforcement planners assumed that airline hijackers would land their planes and negotiate a settlement — until they didn't. Prior to Stuxnet, control system engineers believed that air-gapped systems were immune to outside interference — until a virus was installed. Prior to the SolarWinds breach discovery in 2020, IT managers believed that verified updates to a trusted network management platform were legitimate and safe — until the platform itself became the target of a devastating supply chain attack. 

The severity of injury caused by these accidents is often determined by the extent to which novel risks were unforeseen, or assumed not to be threats in the first place. In other words, the more basic the assumption, the more harmful the compromise. The objective of security is to be safe not only now, but also in the future, anticipating and mitigating threats that might arise at a later time and place through adequate preparation and security. And the assumptions we make about the future environment form the basis for that work. Assumptions are required for any security strategy to be cohesive. But they have a shelf life. 

It is doubtful that today's assumptions will still hold tomorrow. We understand that growing interdependencies will inevitably lead to cross-domain and cross-disciplinary security concerns. We know that the endless cycles of discover-and-patch, identify-and-neutralise, and detect-and-respond will become even harder to maintain than they are now, given the pace of change driven by technological advancement. 

Adopting a future-resilience approach 

Recognising the shifting situation, we have endeavoured to speed this process by collecting and sharing more data, gaining deeper insights from more powerful analytics, detecting threat actors and their behaviours earlier, and responding faster to ongoing attacks. 

But we're falling further behind. By the time we understand a threat actor's aims and attack methods, it is often too late to counter their moves. The primary challenge is to plan for a future with an unknown risk profile. To become more resilient in a world of "unseen until it's too late" challenges, we must tighten our strategies and stress-test our assumptions. The future of security will be about resilience in the face of unknown hazards. Monitoring trends and anticipating threats is not sufficient; we must also reconsider the assumptions that support our current sense of security. 

A new, future-resilient strategy will need to incorporate a purposeful process of challenging existing assumptions while they are still relevant in order to predict a future in which those assumptions are undermined. Then, based on this new future "reality," we can devise strategies for survival. In other words, we move away from assessing the current environment, making assumptions about the future, identifying threats, and then mitigating those risks, and towards explicitly identifying our assumptions, "making up" threats to undermine those assumptions, and building resilience to survive that future.

Five Challenges to Adoption of Liquid Cooling in Data Centers

 

Data centre liquid cooling systems are becoming increasingly popular due to their greater heat management effectiveness when compared to traditional air cooling methods. However, as technology advances, new security issues emerge, such as cybersecurity and physical risks. 

These concerns are critical to industry professionals as they can result in data breaches, system disruptions, and considerable operational downtime. Understanding and minimising these risks ensures that a data centre is reliable and secure. This method emphasises the significance of a comprehensive approach to digital and physical security in the changing landscape of data centre cooling technology. 

But the transition from air to liquid is not easy. Here are some of the main challenges to the implementation of liquid cooling in data centres: 

Two cooling systems instead of one

It is rarely practical for an established data centre to switch to liquid cooling one rack at a time. The facilities personnel will have to operate two cooling systems rather than one, according to Lex Coors, chief data centre technology and engineering officer of Interxion, the European colocation behemoth. This makes liquid cooling a better option for new data centres or those in need of a major overhaul. 

No standards 

The lack of industry standards for liquid cooling is a significant barrier to widespread use of the technology. "The customer, first of all, has to come with their own IT equipment ready for liquid cooling," Coors stated. "And it's not very standardized -- we can't simply connect it and let it run.” Interxion does not currently have consumers using liquid cooling, but the company is prepared to support it if necessary, according to Coors. 

Corrosion

Corrosion is a challenge in liquid cooling, as it is in any system that uses water to flow through pipes. "Corrosion in those small pipes is a big issue, and this is one of the things we are trying to solve today," Mr. Coors added. Manufacturers are improving pipelines to reduce the possibility of leaks and to automatically close if one occurs. 

Physical security 

Physical tampering with data centre liquid cooling systems poses serious security threats, since unauthorised modifications can disrupt operations and jeopardise system integrity. Malicious insiders, such as disgruntled employees or contractors, can use their physical access to change settings, introduce contaminants, or disable cooling devices. 

Such acts can cause overheating, device failures, and protracted downtime, compromising data centre performance and security. Insider threats highlight the importance of rigorous access controls, extensive background checks, and ongoing monitoring of personnel activities. These elements help to prevent and respond promptly to physical sabotage. 

Operational complexity 

The company that offers colocation and cloud computing services, Markley Group, plans to implement liquid cooling in a high-performance cloud data centre early next year. According to Jeff Flanagan, executive VP of Markley Group, the biggest risk could be increased operational complexity. 

"As a data center operator, we prefer simplicity," he said. "The more components you have, the more likely you are to have failure. When you have chip cooling, with water going to every CPU or GPU in a server, you're adding a lot of components to the process, which increases the potential likelihood of failure.”

Cisco Firepower Management Center Impacted By a High-Severity Vulnerability

 

Cisco addressed a flaw in the web-based management interface of the Firepower Management Centre (FMC) Software, identified as CVE-2024-20360 (CVSS score 8.8). 

The vulnerability is a SQL injection bug; an intruder can use it to acquire any data from the database, run arbitrary commands on the underlying operating system, and elevate privileges to root. The attacker can only exploit this flaw if they have at least Read Only user privileges. 

“A vulnerability in the web-based management interface of Cisco Firepower Management Center (FMC) Software could allow an authenticated, remote attacker to conduct SQL injection attacks on an affected system.” reads the advisory. “This vulnerability exists because the web-based management interface does not adequately validate user input. An attacker could exploit this vulnerability by authenticating to the application and sending crafted SQL queries to an affected system.” 

“A successful exploit could allow the attacker to obtain any data from the database, execute arbitrary commands on the underlying operating system, and elevate privileges to root. To exploit this vulnerability, an attacker would need at least Read Only user credentials,” the advisory adds. 
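The root cause the advisory describes, user input that is not adequately validated before reaching SQL, can be illustrated generically. The sketch below (using SQLite for brevity; this is not Cisco's code) contrasts string concatenation with parameterized queries, the standard defence against this class of bug:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str) -> list:
    # Vulnerable: user input is concatenated straight into the SQL string.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str) -> list:
    # Safe: the driver binds the value, so input is never parsed as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

Passing a classic payload such as `' OR '1'='1` to the unsafe version returns every row, while the parameterized version treats it as a literal string and returns nothing.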

According to Cisco, there are no workarounds that address this vulnerability. The IT giant confirmed that neither Firepower Threat Defence (FTD) nor Adaptive Security Appliance (ASA) software is impacted by this security vulnerability. The Cisco Product Security Incident Response Team (PSIRT) is not aware of any attacks exploiting this vulnerability in the wild. 

Security patch 

Cisco has published free software upgrades to address the vulnerability stated in the advisory. Customers with service contracts that include regular software updates should receive security fixes through their usual update channels. Customers can only install and get support for software versions and feature sets for which they have acquired a licence. Customers agree to abide by the terms and conditions of the Cisco software licence while installing, downloading, accessing, or using such software upgrades. 

Furthermore, customers may only download software for which they have a valid licence, either directly from Cisco or through a Cisco authorised reseller or partner. In most cases, this will be a maintenance upgrade for already purchased software. Customers that receive free security software updates are not entitled to a new software licence, additional software feature sets, or significant revision upgrades.

Deepfakes and AI’s New Threat to Cyber Security

 

With its potential to manipulate reality, violate privacy, and facilitate crimes like fraud and character assassination, deepfake technology presents significant risks to celebrities, prominent individuals, and the general public. This article analyses recent incidents which bring such risks to light, stressing the importance of vigilance and preventative steps.

In an age where technology has advanced at an unprecedented rate, the introduction of deepfake technologies, such as stable diffusion software, presents a serious and concerning threat. This software, which was previously only available to trained experts, is now shockingly accessible to the general public, creating severe issues about privacy, security, and the integrity of digital content.

The alarming ease with which stable diffusion software can be downloaded and used has opened a Pandora's box of possible abuse. With a few clicks, anyone with basic technological knowledge can access these tools, which can generate hyper-realistic deepfakes. This software, which employs sophisticated artificial intelligence algorithms, can modify photographs and videos to the point that the generated content appears astonishingly real, blurring the line between truth and deception. 

This ease of access significantly reduces the barrier to entry for developing deepfakes, democratising a technology that was previously only available to individuals with significant computational resources and technical experience. Anyone with a simple computer and internet access can now obtain stable diffusion software. This development has significant ramifications for personal privacy and security. It raises serious concerns about the potential for abuse, particularly against prominent figures, celebrities, and high-net-worth individuals, who are frequently the target of such malicious activity. 

Rise in incidents targeting different sectors 

Deepfakes: According to the World Economic Forum, the number of deepfake videos online has increased by an astonishing 900% every year. The surge in cases of harassment, revenge, and crypto frauds highlights an increasing threat to everyone, especially those in the public eye or with significant assets. 

Elon Musk impersonation: In one noteworthy case, scammers used a deepfake video of Elon Musk to promote a fraudulent cryptocurrency scheme, causing large financial losses for people misled by the hoax. This instance highlights the potential for deepfakes to be utilised in sophisticated financial crimes against naïve investors.

Targeting organisations: Deepfakes offer a significant threat to organisations, with reports of extortion, blackmail, and industrial espionage. A prominent case involves fraudsters tricking a bank manager in the UAE with a voice deepfake, resulting in a $35 million robbery. In another case, scammers used a deepfake to deceive Binance, a large cryptocurrency platform, during an online encounter.

Conclusion 

The incidents mentioned above highlight the critical need for safeguards against deepfake technology. This is where services like Loti come in, providing tools to detect and counteract unauthorised usage of a person's image or voice. Celebrities, high-net-worth individuals, and corporations use such safeguards to protect not only their privacy and reputation, but also against potential financial and emotional harm.

Finally, as deepfake technology evolves and presents new issues, proactive measures and increased knowledge can help reduce its risks. Companies like Loti provide a significant resource in this continuous battle, helping to maintain personal and professional integrity in the digital age.

Here's Why Tokens Are Like Treasure for Opportunistic Attackers

 

Authentication tokens are not tangible tokens, of course. However, if these digital IDs are not routinely expired or restricted to a single device, they may be worth millions of dollars in the hands of threat actors.

Authentication tokens (commonly called "session tokens") play a vital role in cybersecurity. They encapsulate login authorization data, allowing for app validations and safe, authenticated logins to networks, SaaS applications, cloud computing, and identity provider (IdP) systems, as well as single sign-on (SSO) enabling ubiquitous corporate system access. This means that anyone holding a token has a golden key to company systems without having to complete a multifactor authentication (MFA) challenge. 

Drawbacks of employee convenience

The lifetime of a token is frequently used to achieve a balance between security and employee convenience, allowing users to authenticate once and maintain persistent access to applications for a set period of time. The attackers are increasingly obtaining these tokens through adversary-in-the-middle (AitM) attacks, in which the hacker is positioned between the user and legitimate applications to steal credentials or tokens, as well as pass-the-cookie attacks, which steal session cookies stored on browsers. 

Personal devices also hold browser caches, but they are not subject to the same level of security as corporate systems, so threat actors can more easily capture tokens from these inadequately secured devices. Yet personal devices are frequently granted access to corporate SaaS apps, posing a risk to corporate networks. 

Once a threat actor secures a token, they get access to the user's rights and authorizations. If they have an IdP token, they can use the SSO features of all business applications that are integrated with the IdP without the need for an MFA challenge. If it is an admin-level credential with accompanying privileges, they have the ability to destroy systems, data, and backups. The longer the token remains active, the more they can access, steal, and damage. Furthermore, they can create new accounts that do not require the token for persisted network access. 

While frequent expiration of session tokens will not prevent these types of assaults, it will significantly reduce the risk footprint by limiting the window of opportunity for a token to work. Unfortunately, we often notice that tokens are not being expired at regular intervals, and some breach reports indicate that default token expirations are being purposely extended. 

Token attacks in the spotlight 

Last year, multiple breaches involving stolen authentication tokens made headlines. Two incidents involved hacked IdP tokens. According to Okta, threat actors were in their systems from September 28 to October 17 as a result of a compromised personal Gmail account. A saved password from the Gmail account was synchronised in the Chrome browser, granting access to a service account, most likely without MFA enforcement. 

Once inside the service account, threat actors were able to obtain additional customer session tokens from ServiceNow's HAR files. The hack ultimately impacted all Okta customer support users. 

Notably, on November 23, 2023, Cloudflare discovered a threat actor attacking its systems via session tokens obtained from the Okta hack. This suggests that these session tokens did not expire 30 to 60 days after the Okta breach – not as a usual course of business, and not in response to the breach.

In September 2023, Microsoft also disclosed that threat actors had obtained a consumer signing key from a Windows crash dump. They then used it to attack Exchange and Active Directory accounts by exploiting an undisclosed flaw that allowed business systems to accept session tokens signed with the consumer signing key. This resulted in the theft of 60,000 US State Department emails. The hack may not have had the same impact if tokens had been more aggressively expired (or pinned).

China Caught Deploying Remote Access Trojan Tailored for FortiGate Devices

 

The Military Intelligence and Security Service (MIVD) of the Netherlands has issued a warning regarding the discovery of a new strain of malware believed to be orchestrated by the Chinese government. Named "Coathanger," this persistent and highly elusive malware has been identified as part of a broader political espionage agenda, targeting vulnerabilities in FortiGate devices.

In a recent advisory, MIVD disclosed that Coathanger was employed in espionage activities aimed at the Dutch Ministry of Defense (MOD) in 2023. Investigations into the breach revealed that the malware exploited a known flaw in FortiGate devices, specifically CVE-2022-42475.
Unlike some malware that relies on new, undisclosed vulnerabilities (zero-day exploits), Coathanger operates as a second-stage malware and does not exploit any novel vulnerabilities. However, the advisory emphasizes that it could potentially be used in conjunction with future vulnerabilities in FortiGate devices.

Described as stealthy and resilient, Coathanger conceals itself through sophisticated methods, such as hooking system calls to evade detection. It can survive system reboots and firmware upgrades, making it particularly challenging to eradicate.

According to Dutch authorities, Coathanger is just one component of a larger-scale cyber espionage campaign orchestrated by Chinese state-sponsored threat actors. These actors target various internet-facing edge devices, including firewalls, VPN servers, and email servers.

The advisory issued by Dutch intelligence underscores the aggressive scanning tactics employed by Chinese threat actors, who actively seek out both disclosed and undisclosed vulnerabilities in edge devices. It warns of their rapid exploitation of vulnerabilities, sometimes within the same day they are made public.

Given the popularity of Fortinet devices as cyberattack targets, businesses are urged to prioritize patch management. Recent reports from Fortinet highlighted the discovery of two critical vulnerabilities in its FortiSIEM solution, emphasizing the importance of prompt patching.

To mitigate the risk posed by Coathanger and similar threats, intelligence analysts recommend conducting regular risk assessments on edge devices, restricting internet access on these devices, implementing scheduled logging analysis, and replacing any hardware that is no longer supported.