

Threat Actors Pose As Remote IT Workers on LinkedIn to Hack Companies


IT workers linked to the Democratic People's Republic of Korea (DPRK) are now applying for remote jobs using LinkedIn accounts belonging to other individuals, a notable shift in tactics.

According to a Security Alliance (SEAL) post on X, "These profiles often have verified workplace emails and identity badges, which DPRK operatives hope will make their fraudulent applications appear legitimate."

The IT worker scheme has plagued the industry for years. Originating from North Korea, the threat actors pose as remote workers to obtain jobs at Western organizations and elsewhere using fake identities. The scheme is tracked under names such as Wagemole, PurpleDelta, and Jasper Sleet.

The end goal?

To generate significant income that funds the country's cyber espionage operations and weapons programs, and to conduct ransomware campaigns.

In January, cybersecurity firm Silent Push described the DPRK remote worker program as a "high-volume revenue engine" for the country, one that gives its operatives administrative access to sensitive codebases along with the perks of corporate infrastructure.

Once they receive their salaries, DPRK IT workers move the funds as cryptocurrency through multiple money laundering techniques.

Chain-hopping and token swapping are two ways the IT workers and their money laundering associates sever the on-chain link between the source and destination of payments. To make tracing more difficult, they also use smart contracts such as bridge protocols and decentralized exchanges.

What should individuals do?

Users who suspect their identities are being used in fraudulent job applications should post a warning on their social media accounts and report the abuse through official channels. SEAL advises to always "validate that accounts listed by candidates are controlled by the email they provide. Simple checks like asking them to connect with you on LinkedIn will verify their ownership and control of the account."

The news comes after the Norwegian Police Security Service (PST) released an advisory saying it is aware of "several cases" in the last 12 months in which IT worker schemes have affected Norwegian companies.

PST reported last week that “businesses have been tricked into hiring what are likely North Korean IT workers in home office positions. The salary income North Korean employees receive through such positions probably goes to finance the country's weapons and nuclear weapons program.”

Federal Court Fines FIIG $2.5 Million for Major Cybersecurity Breaches; Schools Push Phone-Free Policies

 


Fixed income manager FIIG Securities has been ordered by the Federal Court to pay $2.5 million in penalties over serious cybersecurity shortcomings. The ruling follows findings that the firm failed to adequately safeguard client data over a four-year period, culminating in a significant cyberattack in 2023.

The breach impacted approximately 18,000 clients and resulted in the theft of around 385 gigabytes of sensitive data. Information exposed on the dark web included driver’s licences, passport details, bank account information and tax file numbers.

According to the court, between 13 March 2019 and 8 June 2023, FIIG failed to implement essential cybersecurity safeguards. These failures included insufficient allocation of financial and technological resources, lack of qualified cybersecurity personnel, absence of multi-factor authentication for remote access, weak password and privileged account controls, inadequate firewall and software configurations, and failure to conduct regular penetration testing and vulnerability scans.

The firm also lacked a structured software update process to address security vulnerabilities, did not have properly trained IT staff monitoring threat alerts, failed to provide mandatory cybersecurity awareness training to employees, and did not maintain or regularly test an appropriate cyber incident response plan.

In addition to the $2.5 million penalty, the court ordered FIIG to contribute $500,000 toward ASIC’s legal costs. The company must also undertake a compliance program, including appointing an independent expert to review and strengthen its cybersecurity and cyber resilience frameworks.

This marks the first instance in which the Federal Court has imposed civil penalties for cybersecurity breaches under general Australian Financial Services (AFS) licence obligations.

“FIIG admitted that it failed to comply with its AFS licence obligations and that adequate cyber security measures – suited to a firm of its size and the sensitivity of client data held – would have enabled it to detect and respond to the data breach sooner.

“It also admitted that complying with its own policies and procedures could have supported earlier detection and prevented some or all of the client information from being downloaded.”

ASIC deputy chair Sarah Court emphasised the regulator’s stance on cybersecurity compliance: “Cyber-attacks and data breaches are escalating in both scale and sophistication, and inadequate controls put clients and companies at real risk.

“ASIC expects financial services licensees to be on the front foot every day to protect their clients. FIIG wasn’t – and they put thousands of clients at risk.

“In this case, the consequences far exceeded what it would have cost FIIG to implement adequate controls in the first place.”

Responding to the ruling, FIIG stated: “FIIG accepts the Federal Court’s ruling related to a cybersecurity incident that occurred in 2023 and will comply with all obligations. We cooperated fully throughout the process and have continued to strengthen our systems, governance and controls. No client funds were impacted, and we remain focused on supporting our clients and maintaining the highest standards of information security.”

ASIC Steps Up Cyber Enforcement

The case underscores ASIC’s growing focus on cybersecurity enforcement within the financial services sector.

In July 2025, ASIC initiated civil proceedings against Fortnum Private Wealth Limited, alleging failures to appropriately manage and mitigate cybersecurity risks. Earlier, in May 2022, the Federal Court determined that AFS licensee RI Advice had breached its obligations by failing to maintain adequate risk management systems to address cybersecurity threats.

The Court stated: “Clients entrust licensees with sensitive and confidential information, and that trust carries clear responsibilities.”

In its 2026 key priorities document, ASIC identified cyberattacks, data breaches and weak operational resilience as major risks capable of undermining market integrity and harming consumers.

“Digitisation, legacy systems, reliance on third parties, and evolving threat actor capability continue to elevate cyber risk in ASIC’s view. ASIC is urging directors and financial services license holders to maintain robust risk management frameworks, test their operational resilience and crisis responses, and address vulnerabilities with their third-party service providers.”

Smartphone Restrictions Gain Momentum in Schools

Separately, debate over smartphone use in schools continues to intensify as institutions adopt phone-free policies to improve learning outcomes and student wellbeing.

Addressing concerns about the cost and necessity of phone restrictions, one advocate explained:

"Yes, it can seem an expensive way of keeping phones out of schools, and some people question why they can't just insist phones remain in a student's bag.

"But smartphones create anxiety, fixation, and FOMO - a fear of missing out. The only way to genuinely allow children to concentrate in lessons, and to enjoy break time, is to lock them away."

Supporters argue that schools introducing phone-free systems have seen tangible improvements.

"There have been notable improvements in academic performance, and headteachers also report reductions in bullying," he explains.

Vale of York Academy implemented phone pouches in November. Headteacher Gillian Mills told the BBC:

"It's given us an extra level of confidence that students aren't having their learning interrupted.

"We're not seeing phone confiscations now, which took up time, or the arguments about handing phones over, but also teachers are saying that they are able to teach."

The political landscape is also responding. Conservative leader Kemi Badenoch has pledged to enforce a nationwide smartphone ban in schools if elected, while the Labour government has opted to leave decisions to headteachers and launched a consultation on limiting social media access for under-16s.

As part of broader measures, Ofsted will gain authority to assess school phone policies, with ministers signalling expectations that schools become “phone-free by default”.

Some parents, however, prefer their children to carry phones for safety during travel.

"The first week or so after we install the system is a nightmare," he adds. "Kids refuse, or try and break the pouches open. But once they realise no-one else has a phone, most of them embrace it as a kind of freedom."

The broader societal debate continues as smartphone use expands alongside social media and AI-driven content ecosystems.

"We're getting so many enquiries now. People want to ban phones at weddings, in theatres, and even on film sets," he says.

"Effectively carrying a computer around in your hand has many benefits, but smartphones also open us up to a lot of misdirection and misinformation.

"Enforcing a break, especially for young people, has so many positives, not least for their mental health."

Dugoni believes society may be approaching a critical moment:

"We're getting close to threatening the root of what makes us human, in terms of social interaction, critical thinking faculties, and developing the skills to operate in the modern world," he explains.

AI and Network Attacks Redefine Cybersecurity Risks on Safer Internet Day 2026

 

As Safer Internet Day 2026 approaches, expanding AI capabilities and a rise in network-based attacks are reshaping digital risk. Automated systems now drive both legitimate platforms and criminal activity, prompting leaders at Ping Identity, Cloudflare, KnowBe4, and WatchGuard to call for updated approaches to identity management, network security, and user education. Traditional defences are struggling against faster, more adaptive threats, pushing organisations to rethink protections across access, infrastructure, and human behaviour. While innovation delivers clear benefits, it also equips attackers with powerful tools, increasing risks for businesses, schools, and policymakers who fail to adapt.  

Ping Identity highlights a widening gap between legacy security models and modern AI operations. Systems designed for static environments are ill-suited to dynamic AI applications that operate independently and make real-time decisions. Alex Laurie, the company’s go-to-market CTO, explained that AI agents now behave like active users, initiating processes, accessing sensitive data, and choosing next steps without human prompts. Because their actions closely resemble those of real people, distinguishing between human and machine activity is increasingly difficult. Without proper oversight, these agents can introduce unpredictable risks and expand organisational attack surfaces. 

Laurie advocates moving beyond static credentials toward continuous, verified trust. Instead of assuming legitimacy after login, organisations should validate identity, intent, and context at every interaction. Access decisions must adapt in real time, guided by behaviour and current risk conditions. This approach enables AI innovation while protecting data and users in an environment filled with autonomous digital actors. 
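The continuous, context-aware model Laurie describes can be illustrated with a small sketch. Everything here (the `RequestContext` fields, the risk weights, the `decide_access` function) is hypothetical, written only to show the shape of re-evaluating trust at every interaction rather than once at login; it is not part of any Ping Identity product API.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    identity_verified: bool    # e.g. a valid, unexpired credential
    is_ai_agent: bool          # machine actor rather than a human
    sensitive_resource: bool   # request touches regulated data
    anomalous_behaviour: bool  # deviates from the actor's baseline

def decide_access(ctx: RequestContext) -> str:
    """Re-evaluate trust on every interaction instead of once at login."""
    if not ctx.identity_verified:
        return "deny"
    risk = 0
    if ctx.is_ai_agent:
        risk += 1            # autonomous actors get extra scrutiny
    if ctx.sensitive_resource:
        risk += 2
    if ctx.anomalous_behaviour:
        risk += 3
    if risk >= 4:
        return "deny"
    if risk >= 2:
        return "step_up"     # require re-authentication or human approval
    return "allow"
```

The point of the sketch is the call pattern: the policy runs per request, and an AI agent touching sensitive data gets a step-up challenge even with a valid credential.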

Cloudflare also warns of AI’s dual-use nature. While it boosts efficiency, it accelerates cybercrime by making attacks faster, cheaper, and harder to detect. Pat Breen cited Australian data from 2024–25, when more than 1,200 cyber incidents required response, including a sharp rise in denial-of-service attacks. Such disruptions immediately impact essential services like healthcare, banking, education, transport, and government systems. Whether AI ultimately increases safety or risk depends on how quickly cyber defences evolve. 

KnowBe4’s Erich Kron stresses the importance of digital mindfulness as AI-generated content and deepfakes spread. Identifying fake content is no longer a technical skill but a basic life skill. Verifying information, protecting personal data, using strong authentication, and keeping software updated are critical habits for reducing harm.

WatchGuard Technologies reports a shift away from malware toward network-focused attacks. Anthony Daniel notes that this trend reinforces the need for Zero Trust strategies that verify every connection. Safer Internet Day underscores that cybersecurity is a shared responsibility, strengthened through consistent, everyday actions.

Black Hat Researcher Proves Air Gaps Fail to Secure Data

 

Air gaps, long hailed as the ultimate defense for sensitive data, are under siege according to Black Hat researcher Mordechai Guri. In a compelling presentation, Guri demonstrated multiple innovative methods to exfiltrate information from supposedly isolated computers, shattering the myth of complete offline security. These techniques exploit everyday hardware components, proving that physical disconnection alone cannot guarantee protection in high-stakes environments like government and military networks.

Guri's BeatCoin malware turns computer speakers into covert transmitters, emitting near-ultrasonic sounds inaudible to humans but detectable by nearby smartphones up to 10 meters away. This allows private keys or other secrets to leak out effortlessly. Even disabling speakers fails, as Fansmitter modulates fan speeds to alter blade frequencies, creating acoustic signals receivable by listening devices within 8 meters. For scenarios without microphones, the Mosquito attack repurposes speakers as rudimentary microphones via GPIO manipulation, enabling ultrasonic data transmission between air-gapped machines.
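The general idea behind such acoustic covert channels can be sketched in a few lines: encode each bit as a short near-ultrasonic tone at one of two frequencies (frequency-shift keying). The frequencies, bit duration, and sample rate below are illustrative assumptions for demonstration, not parameters taken from Guri's published work.

```python
import math

SAMPLE_RATE = 48_000   # samples per second
BIT_DURATION = 0.05    # seconds per bit
FREQ = {0: 18_000.0, 1: 18_500.0}  # near-ultrasonic tones for 0 and 1

def encode_bits(bits):
    """Return a list of float samples in [-1, 1] representing the bits."""
    samples = []
    n = int(SAMPLE_RATE * BIT_DURATION)
    for bit in bits:
        f = FREQ[bit]
        for i in range(n):
            samples.append(math.sin(2 * math.pi * f * i / SAMPLE_RATE))
    return samples

payload = encode_bits([1, 0, 1])  # 3 bits -> 7,200 samples at these settings
```

A receiver within range would demodulate by measuring which of the two frequencies dominates in each bit window, which is why even low-bandwidth channels like this can leak short secrets such as private keys.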

Electromagnetic exploits further erode air-gap defenses. AirHopper manipulates monitor cables to radiate FM-band signals, capturable by a smartphone's built-in receiver. GSMem leverages CPU-RAM pathways to generate cellular-like transmissions detectable by basic feature phones, while USBee transforms USB ports into antennas for broad leakage. These methods highlight how standard peripherals become unwitting conduits for data escape.

Faraday cages, designed to block electromagnetic waves, offer no sanctuary either. Guri's ODINI attack generates low-frequency magnetic fields from CPU cores, penetrating these shields. PowerHammer goes further by inducing parasitic signals on building power lines, tappable by attackers monitoring electrical infrastructure. Such persistence underscores the vulnerability of even fortified setups.

While these attacks assume initial malware infection—often via USB or insiders—real-world precedents like Stuxnet validate the threat. Organizations must layer defenses with anomaly detection, hardware restrictions, and continuous monitoring beyond mere air-gapping. Guri's work urges a reevaluation of "secure" isolation strategies in an era of sophisticated side-channel threats.

Intelligent Vehicles Fuel a New Era of Automotive Data Trade


 

In the past, automotive sophistication was measured in mechanical terms. Conversations centered on engine calibration, drivetrain refinement, suspension geometry, and steering feedback.

The shorthand used to describe innovation was horsepower output, torque delivery, and braking distance. That hierarchy has been radically altered, with the industry undergoing an unprecedented transformation over the last two years.

In recent years, electrification has evolved from an ambitious strategy into a mainstream expectation. Feature subscriptions have reshaped ownership economics in many ways. Driver assistance systems and semiautonomous capabilities have progressed from experimental prototypes to production versions.

In contrast to mechanical engineering, software now serves as a coequal force that shapes product identity and long-term value for consumers. The consumer increasingly evaluates vehicles based on their digital capabilities, rather than purely mechanical differences. 

As important as acceleration figures and ride quality are, over-the-air update infrastructure, predictive diagnostics, integrated app ecosystems, natural language interfaces, and automated parking functions carry a significant amount of weight. It is not only important for vehicles to perform well on the road, but also that they integrate with digital life, adapt to changes through data, and improve over time. 

The contemporary automobile has evolved not only in terms of its chassis and powertrain, but also through its software stack and network connectivity. Digital architecture is no longer an overlay on a vehicle; it is integral to its design. Technology realignment has been accompanied by an important recalibration of federal AI policy. 

On the first day of his administration, President Donald Trump signed Executive Order 14179, repealing previous directives considered restrictive to domestic AI development. The order superseded a 2023 framework that stressed precautionary oversight and risk mitigation.

The earlier guidance warned that irresponsible or poorly governed AI adoption would intensify fraud, bias, discrimination, labor displacement, competitive distortions, and national security vulnerabilities, and that safeguards should therefore be proportionate to AI's growing influence.

With those executive guardrails removed, the regulatory environment now tilts in favor of acceleration and competitive positioning. The implications are immediate for sectors already integrating machine learning into operational infrastructure, including automakers that embed it in vehicle operating systems, driver monitoring, predictive maintenance, and personalization engines.

Consequently, the federal government has focused on technological leadership and deployment velocity as part of its policy shift. With vehicles becoming increasingly connected computing platforms capable of continuous data capture and algorithmic decision-making, the absence of prescriptive federal constraints creates an opportunity for rapid integration of artificial intelligence-based features across passenger vehicles and commercial fleets. 

AI dominated CES 2026, where automakers presented it not as a supplement to next-generation mobility ecosystems but as their enabling layer, accelerating autonomous driving initiatives in particular.

Doug Field, the Ford executive in charge of electric vehicles, digital platforms, and design, articulated a vision of artificial intelligence as an embedded companion system: an adaptive layer able to synthesize contextual inputs such as driving behavior, geographical location, and vehicle performance.

The objective, he argued, is to interpret complex conditions in real time and translate them into intuitive interactions between driver and machine, simplifying decision-making. Ford plans to pursue this vision as early as 2027 by integrating embedded artificial intelligence assistants into all new and refreshed models. The initiative reflects the industry's broader shift toward software-defined vehicle architectures that incorporate cloud connectivity, scalable computing, and continuous training to enhance functionality long after the vehicle is sold.

Additionally, the company has taken steps to define its data governance position. The Chief Privacy Officer at Ford, Kristin Jones, has stated publicly that the company does not sell vehicle data, but instead uses it to support connected services and to improve products. 

In communications with customers, the company has made it clear that data practices will be transparent, and that customers will be able to determine if their data is shared for designated purposes. A broader competitive trend is reflected in Ford's approach. Manufacturers across the globe are integrating generative and conversational artificial intelligence engines into the infotainment and vehicle control systems. 

Volkswagen has integrated its IDA assistant with ChatGPT while emphasizing the protection of personal information. Mercedes-Benz has enhanced its MBUX interface with both ChatGPT and Google's Gemini models. BMW has presented an AI-based assistant built on Amazon's Alexa+ infrastructure, showcasing its capabilities in a public demonstration.

In recent years, Tesla has integrated Grok, an artificial intelligence model developed within its larger technology ecosystem, into aspects of its in-vehicle experience—a move attracting scrutiny due to the prior controversy surrounding the model's external application. 

In addition to enhanced voice recognition and natural language command processing, some deployments also include telemetry analysis, driver behavior modeling, contextual personalization, and adaptive cabin intelligence. Geely's CES presentation made the significance of the shift plain: company leadership characterized the modern vehicle as a computing system rather than a mechanical platform enhanced with software.

Its Full-Domain AI 2.0 supports an intelligent cockpit environment and advanced autonomous driving through a unified framework. The accompanying Geely Afari Smart Driving system integrates perception modules, decision-making engines, and interface layers into a single artificial intelligence stack. The framing was explicit: competitive advantage in the automotive sector now rests on algorithmic capability, data throughput, and computational performance rather than traditional mechanical differentiation.

A parallel development in the autonomous driving supply chain reinforces that trajectory. At CES, Nvidia exhibited its open-source Alpamayo family of artificial intelligence models tailored to self-driving applications.

The growing dependency of autonomous systems on large-scale model training and real-time inference highlights the need for scalable, high-performance computing infrastructure. Nuro, meanwhile, is integrating its artificial intelligence technologies into an upcoming robotaxi platform built around the Lucid Gravity vehicle architecture.

These announcements demonstrate the convergence of automotive engineering, cloud computing, semiconductor innovation, and machine learning. In the process, vehicles have become persistent data-generating systems, collecting granular telemetry, geolocation histories, biometric indicators, and inputs from environmental mapping systems.

The continuous data streams produced by autonomous stacks and AI companions are not guaranteed to be free from secondary use or commercial repurposing across jurisdictions. Adjacent digital industries have historically demonstrated that monetization incentives and third-party data-sharing arrangements tend to multiply once large-scale data ecosystems are established.

In a policy landscape that emphasizes rapid deployment of artificial intelligence (AI), the boundaries governing automotive data flows are uneven and in some cases undefined, and the commercial logic of data extraction is becoming embedded in vehicle development roadmaps.

There are recurring patterns in regulatory settlements, investigative reports, and litigation: technical capability generally advances more rapidly than the governance mechanisms designed to prevent misuse. Although manufacturers describe their artificial intelligence systems as copilots or intelligent assistants, these systems rest on extensive, continuous data acquisition frameworks that demand disciplined oversight.

The automotive industry may achieve sustainable advancements less by incremental improvements in model performance than by ensuring that the underlying data architecture is robust. It is necessary to translate concepts of privacy-by-design, granular consent interfaces, strict purpose limits, and rigorous data minimization from policy language into technical controls that can be enforced within firmware, vehicle operating systems, and cloud backends. 

Cross-border data-sharing agreements should be expected to be subject to regulatory scrutiny in markets where vehicles are operated. De-identification processes should be auditable and technically valid, rather than declarative.
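Purpose limitation and data minimization, as described above, can be enforced in code rather than left in policy text. The sketch below is hypothetical: the field names, purposes, and consent model are invented for illustration and do not reflect any manufacturer's actual telemetry schema.

```python
# Fields that may leave the vehicle, per declared purpose (illustrative).
ALLOWED_FIELDS = {
    "diagnostics": {"battery_health", "error_codes", "odometer"},
    "navigation":  {"gps_position", "destination"},
}

def minimize(record: dict, purpose: str, consented: set) -> dict:
    """Keep only fields needed for a purpose the owner has consented to."""
    if purpose not in consented:
        return {}  # no consent, no data leaves the car
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

telemetry = {
    "battery_health": 0.91,
    "gps_position": (52.5, 13.4),
    "cabin_audio": b"...",          # never whitelisted for any purpose
    "error_codes": ["P0420"],
}
shared = minimize(telemetry, "diagnostics", consented={"diagnostics"})
```

The design choice worth noting is the default-deny posture: a field is transmitted only if it appears on an explicit per-purpose allow-list, so newly added sensors stay on the vehicle until someone consciously whitelists them.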

UK Construction Company’s Windows Server Infiltrated by Prometei Botnet

 



In January 2026, a construction company in the United Kingdom found an unwelcome presence inside one of its Windows servers. Cybersecurity analysts from eSentire’s Threat Response Unit (TRU) determined that the intruder was a long-running malware network known as Prometei, a botnet with links to Russian threat activity and active since at least 2016.

Although Prometei has been widely observed conducting covert cryptocurrency mining, the investigation showed that this malware can do much more than simply generate digital currency. In this case, it was also capable of capturing passwords and potentially enabling remote control of the affected system.

According to the analysis shared with cybersecurity media, this attack did not involve complex hacking techniques. The initial intrusion appears to have occurred because the attackers were able to successfully log into the server using Remote Desktop Protocol (RDP) with weak or default login credentials. Remote Desktop, a tool used to access computers over a network, can be exploited easily if account passwords are simple.

Prometei is not a single program that drops onto a system. Instead, it operates as a collection of tools designed to carry out multiple functions once it gains access. When the malware first infects a machine, it adds a new service with a name such as “UPlugPlay,” and it creates a file called sqhost.exe to ensure that it relaunches automatically every time the server restarts.

Once these persistence mechanisms are in place, the malware downloads its main functional component, often called zsvc.exe, from a command server linked to an entity identified in analysis as Primesoftex Ltd. This payload is transmitted in encrypted form and disguised to avoid detection.
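The named indicators above (the "UPlugPlay" service, `sqhost.exe`, `zsvc.exe`) lend themselves to a simple triage check. This is a minimal sketch operating on pre-collected inventories (for example, exported service and file listings), intended as an illustration of indicator matching, not a substitute for EDR tooling; the function and set names are my own.

```python
# Indicators drawn from the Prometei behaviour described in the article.
SUSPICIOUS_SERVICES = {"uplugplay"}
SUSPICIOUS_FILES = {"sqhost.exe", "zsvc.exe"}

def triage(services, files):
    """Return (kind, value) indicator hits found in the supplied inventories."""
    hits = []
    for name in services:
        if name.lower() in SUSPICIOUS_SERVICES:
            hits.append(("service", name))
    for path in files:
        # Compare only the file name, case-insensitively.
        if path.rsplit("\\", 1)[-1].lower() in SUSPICIOUS_FILES:
            hits.append(("file", path))
    return hits
```

Because the file names are generic-looking by design, a hit warrants investigation rather than automatic action; hash- and behaviour-based confirmation should follow.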

After establishing itself, Prometei collects basic technical information about the infected system by using legitimate Windows utilities. It then employs credential-harvesting techniques that resemble the behaviour of publicly known tools, capturing passwords stored on the server and within the network. In the course of this activity, Prometei commonly leverages the TOR anonymity network to conceal its command and control communications, making it harder for defenders to trace its actions.

Prometei also has built-in countermeasures to evade analysis and detection. For example, the malware checks for the presence of a specific file called mshlpda32.dll. If this file is absent, instead of crashing or revealing obvious malicious behaviour, the malware executes benign-looking operations that mimic routine system tasks. This is a deliberate method to confuse security researchers and automated analysis tools that attempt to study the malware in safe environments.
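The branch structure of that evasion trick is easy to show in a defensive sketch. The payload here is a harmless stub; the point is the pattern analysts should recognise: behaviour keyed on a marker file, with a benign decoy path for sandboxes where the marker is absent.

```python
import os

# Marker the real malware reportedly checks for (per the analysis above).
MARKER = r"C:\Windows\System32\mshlpda32.dll"

def choose_behaviour(marker_path: str = MARKER) -> str:
    if not os.path.exists(marker_path):
        # In a sandbox without the marker, mimic routine system tasks
        # so automated analysis sees nothing obviously malicious.
        return "benign_decoy"
    return "malicious_payload"
```

For defenders, the takeaway is that a sample which "does nothing" in a sandbox may simply be missing an environmental trigger, so detonation environments should replicate the expected host artifacts.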

In a further twist, once Prometei has established a foothold, it also deploys a utility referred to as netdefender.exe. This component monitors failed login attempts and blocks them, effectively locking out other potential attackers. While this might seem beneficial, its purpose is to ensure that the malicious operator retains exclusive control of the compromised server.

To protect systems from similar threats, cybersecurity experts urge organisations to replace default passwords with complex, unique credentials. They recommend implementing multi-factor authentication for remote access services, keeping software up to date with security patches, and monitoring login activity for unusual access attempts. eSentire has also released specialised analysis tools that allow defenders to unpack Prometei’s components and study its behaviour in controlled settings.
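One of the recommendations above, monitoring login activity for unusual access attempts, can be sketched as a threshold check over an authentication log. The log format here is an assumption (one `timestamp,source_ip,FAIL|OK` entry per line); a real deployment would parse Windows Security Event ID 4625 or the equivalent for its platform.

```python
from collections import Counter

def flag_brute_force(log_lines, threshold=5):
    """Return source IPs with at least `threshold` failed logins."""
    failures = Counter()
    for line in log_lines:
        _ts, ip, outcome = line.strip().split(",")
        if outcome == "FAIL":
            failures[ip] += 1
    return {ip for ip, n in failures.items() if n >= threshold}
```

Flagged IPs can then feed a block list or an alert; note this is exactly the signal Prometei's own `netdefender.exe` component abuses to lock rival attackers out.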

