
Thousands of WordPress Sites at Risk as Motors Theme Flaw Enables Admin Account Takeovers

 

A critical security flaw tracked as CVE-2025-4322 has left a widely used premium WordPress theme exposed to attackers.

Cybercriminals have been exploiting this vulnerability in the Motors theme to seize administrator accounts, allowing them to fully compromise websites—modifying information, inserting fake content, and distributing malicious payloads.

Developed by StylemixThemes, Motors has become especially popular with automotive websites, recording nearly 22,500 sales on Envato Market. Security researchers first identified the flaw on May 2, 2025, and a fix was issued with version 5.6.68 on May 14. Users who have updated to this version are protected, while those still running versions up to 5.6.67 remain vulnerable.

“This is due to the theme not properly validating a user’s identity prior to updating their password,” Wordfence explained.

“This makes it possible for unauthenticated attackers to change arbitrary user passwords, including those of administrators, and leverage that to gain access to their account.”

Despite the release of the patch, attacks began surfacing as early as May 20. By June 7, researchers observed widespread exploitation, with Wordfence reporting it had already blocked over 23,000 attack attempts. The firm also shared lists of IP addresses involved in the attacks, many launching thousands of intrusion efforts.

“One obvious sign of infection is if a site’s administrator is unable to log in with the correct password as it may have been changed as a result of this vulnerability,” the researchers explained.

To secure their sites, users of the Motors theme are strongly advised to upgrade to version 5.6.68 immediately, which addresses the flaw and prevents further account takeovers.
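
The root cause Wordfence describes is a password-update flow that never confirms the requester's identity. The Motors theme itself is PHP, so the sketch below is only a hypothetical illustration in Python (Flask), with invented names such as USERS and set_password, of this class of bug and its fix: verify a per-user, single-use reset token before accepting any password change.

```python
# Hypothetical sketch (Python/Flask), not the Motors theme's PHP code: the bug
# class behind CVE-2025-4322 is a password-update handler that trusts the
# caller's claimed identity. USERS and set_password are invented placeholders.
import hmac
import secrets

from flask import Flask, abort, request

app = Flask(__name__)

# Illustrative in-memory store; a real site would use its user database.
USERS = {"admin": {"password_hash": "", "reset_token": secrets.token_hex(32)}}

def set_password(username: str, new_password: str) -> None:
    # Placeholder for hashing and persisting the new password.
    USERS[username]["password_hash"] = f"hashed({new_password})"

@app.route("/reset-password", methods=["POST"])
def reset_password():
    username = request.form.get("user", "")
    token = request.form.get("token", "")
    new_password = request.form.get("new_password", "")

    user = USERS.get(username)
    if user is None:
        abort(404)

    # The vulnerable pattern skips this check and resets the password for any
    # username supplied. A constant-time comparison against a per-user,
    # single-use token is the minimum validation before accepting the change.
    if not token or not hmac.compare_digest(user["reset_token"], token):
        abort(403)

    user["reset_token"] = secrets.token_hex(32)  # invalidate the used token
    set_password(username, new_password)
    return {"status": "password updated"}
```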

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs— a phenomenon known as “hallucination”— which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies train their chatbots on customer data, drawing on both private and public sources. Some services take a relatively restrained, non-intrusive approach to gathering that data; others are far less careful. A recent analysis from data removal firm Incogni weighs how well popular AI services protect your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy practices using 11 distinct criteria, which addressed the following questions: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps? 

The research covered Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each service performed well on some criteria and poorly on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

Overall, however, Le Chat emerged as the most privacy-friendly AI service. It did well in the transparency category despite losing a few points, collects only a limited amount of data, and scored highly on the AI-specific privacy criteria. 

Second place went to ChatGPT. Researchers at Incogni had some concerns about how user data interacts with the service and how OpenAI trains its models, but ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives explicit instructions on how to restrict how your data is used. Grok placed third, followed by Claude and Pi, each of which protected user privacy reasonably well overall despite weaknesses in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

Some providers let you prevent your prompts from being used to train their models; this is true of Grok, Mistral AI, Copilot, and ChatGPT. Based on their privacy policies and other resources, however, services such as Gemini, DeepSeek, Pi AI, and Meta AI do not appear to offer a way to opt out of this kind of data collection. Anthropic, for its part, states that it never uses user prompts to train its models. 

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being collected and how to opt out.

How AI Impacts KYC and Financial Security


Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the integrity of the identity-verification frameworks that underpin anti-money laundering (AML) and counter-terrorism financing (CTF) controls.

Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street regulator FINRA has warned that deepfake audio and video scams could cause as much as $40 billion in losses in the finance sector by 2027.

Biometric safeguards alone no longer hold the line. A 2024 Regula study found that 49% of businesses across industries such as fintech and banking have faced deepfake-based fraud attacks, with average losses of $450,000 per incident. 

As these numbers rise, it becomes important to understand how deepfake-driven fraud can be prevented in order to protect customers and the financial industry globally. 

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank recorded more than 1,100 attempts to bypass the digital KYC checks in its loan-application process within three months, according to cybersecurity firm Group-IB.

Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank's liveness-detection controls, despite the bank's "robust, multi-layered security measures." According to Forbes, the estimated losses "from these intrusions have been estimated at $138.5 million in Indonesia alone."

The AI-driven face-swapping tools allowed actors to replace the target’s facial features with those of another person, allowing them to exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.

How deepfake KYC fraud works

Scammers gather personal data via malware, the dark web, social networking sites, or phishing scams. This data is then used to impersonate victims. 

After acquiring the data, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, and re-creating entire IDs to slip past KYC checks.

Threat actors then use virtual cameras and prerecorded deepfake videos, helping them avoid security checks by simulating real-time interactions. 

This highlights how inadequate traditional mechanisms are proving against advanced AI scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes. 

Iranian Hackers Threaten More Trump Email Leaks Amid Rising U.S. Cyber Tensions

 

Iran-linked hackers have renewed threats against the U.S., claiming they plan to release more emails allegedly stolen from former President Donald Trump’s associates. The announcement follows earlier leaks during the 2024 presidential race, when a batch of messages was distributed to the media. 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) responded by calling the incident “digital propaganda,” warning it was a calculated attempt to discredit public officials and mislead the public. CISA added that those responsible would be held accountable, describing the operation as part of a broader campaign by hostile foreign actors to sow division. 

Speaking virtually with Reuters, a hacker using the alias “Robert” claimed the group accessed roughly 100 GB of emails from individuals including Trump adviser Roger Stone, legal counsel Lindsey Halligan, White House chief of staff Susie Wiles, and Trump critic Stormy Daniels. Though the hackers hinted at selling the material, they provided no specifics or content. 

The initial leaks reportedly involved internal discussions, legal matters, and possible financial dealings involving RFK Jr.’s legal team. Some information was verified, but had little influence on the election, which Trump ultimately won. U.S. authorities later linked the operation to Iran’s Revolutionary Guard, though the hackers declined to confirm this. 

Soon after Trump ordered airstrikes on Iranian nuclear sites, Iranian-aligned hackers began launching cyberattacks. Truth Social, Trump’s platform, was briefly knocked offline by a distributed denial-of-service (DDoS) attack claimed by a group known as “313 Team.” Security experts confirmed the group’s ties to Iranian and pro-Palestinian cyber networks. 

The outage occurred shortly after Trump posted about the strikes. Users encountered error messages, and monitoring organizations warned that “313 Team” operates within a wider ecosystem of groups supporting anti-U.S. cyber activity. 

The Department of Homeland Security (DHS) issued a national alert on June 22, citing rising cyber threats linked to Iran-Israel tensions. The bulletin highlighted increased risks to U.S. infrastructure, especially from loosely affiliated hacktivists and state-backed cyber actors. DHS also warned that extremist rhetoric could trigger lone-wolf attacks inspired by Iran’s ideology. 

Federal agencies remain on high alert, with targeted sectors including defense, finance, and energy. Though large-scale service disruptions have not yet occurred, cybersecurity teams have documented attempted breaches. Two groups backing the Palestinian cause claimed responsibility for further attacks across more than a dozen U.S. sectors. 

At the same time, the U.S. faces internal challenges in cyber preparedness. The recent dismissal of Gen. Timothy Haugh, who led both the NSA and Cyber Command, has created leadership uncertainty. Budget cuts to election security programs have added to concerns. 

While a military ceasefire between Iran and Israel may be holding, experts warn the cyber conflict is far from over. Independent threat actors and ideological sympathizers could continue launching attacks. Analysts stress the need for sustained investment in cybersecurity infrastructure—both public and private—as digital warfare becomes a long-term concern.

Russian APT28 Targets Ukraine Using Signal to Deliver New Malware Families

 

The Russian state-sponsored threat group APT28, also known as UAC-0001, has been linked to a fresh wave of cyberattacks against Ukrainian government targets, using Signal messenger chats to distribute two previously undocumented malware strains—BeardShell and SlimAgent. 

While the Signal platform itself remains uncompromised, its rising adoption among government personnel has made it a popular delivery vector for phishing attacks. Ukraine’s Computer Emergency Response Team (CERT-UA) initially discovered these attacks in March 2024, though critical infection vector details only surfaced after ESET notified the agency in May 2025 of unauthorised access to a “gov.ua” email account. 

Investigations revealed that APT28 used Signal to send a macro-laced Microsoft Word document titled "Акт.doc." Once opened, it initiates a macro that drops two payloads—a malicious DLL file (“ctec.dll”) and a disguised PNG file (“windows.png”)—while modifying the Windows Registry to enable persistence via COM-hijacking. 

These payloads execute a memory-resident malware framework named Covenant, which subsequently deploys BeardShell. BeardShell, written in C++, is capable of downloading and executing encrypted PowerShell scripts, with execution results exfiltrated via the Icedrive API. The malware maintains stealth by encrypting communications using the ChaCha20-Poly1305 algorithm. 
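
ChaCha20-Poly1305 is a standard authenticated-encryption primitive rather than anything bespoke to BeardShell. For reference only, and not the malware's own code, this is roughly what the algorithm looks like via Python's cryptography package:

```python
# Reference example of ChaCha20-Poly1305 authenticated encryption using the
# `cryptography` package; shown only to illustrate the primitive CERT-UA says
# BeardShell uses for its traffic, not the malware's own code.
import os

from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()  # 256-bit key
nonce = os.urandom(12)                 # 96-bit nonce, must never repeat per key
aad = b"header"                        # optional authenticated-but-unencrypted data

cipher = ChaCha20Poly1305(key)
ciphertext = cipher.encrypt(nonce, b"example payload", aad)

# Decryption raises InvalidTag if the ciphertext or the AAD was tampered with.
plaintext = cipher.decrypt(nonce, ciphertext, aad)
assert plaintext == b"example payload"
```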

Alongside BeardShell, CERT-UA identified another tool dubbed SlimAgent. This lightweight screenshot grabber captures images using multiple Windows API calls, then encrypts them with a combination of AES and RSA before local storage. These are presumed to be extracted later by an auxiliary tool. 

APT28’s involvement was further corroborated through their exploitation of vulnerabilities in Roundcube and other webmail software, using phishing emails mimicking Ukrainian news publications to exploit flaws like CVE-2020-35730, CVE-2021-44026, and CVE-2020-12641. These emails injected malicious JavaScript files—q.js, e.js, and c.js—to hijack inboxes, redirect emails, and extract credentials from over 40 Ukrainian entities. CERT-UA recommends organisations monitor traffic linked to suspicious domains such as “app.koofr.net” and “api.icedrive.net” to detect any signs of compromise.
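
CERT-UA's advice to watch for traffic to those domains can be approximated with a simple log sweep. Below is a minimal sketch assuming a plain-text DNS or proxy log with one entry per line; the file name and format are placeholders, not part of CERT-UA's guidance:

```python
# Minimal IOC sweep: flag log lines mentioning domains CERT-UA associates with
# this campaign. The log path and one-entry-per-line format are assumptions.
SUSPICIOUS_DOMAINS = {"app.koofr.net", "api.icedrive.net"}

def scan_log(path: str) -> list[tuple[int, str]]:
    hits = []
    with open(path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if any(domain in line for domain in SUSPICIOUS_DOMAINS):
                hits.append((lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for lineno, entry in scan_log("dns_queries.log"):
        print(f"possible IOC match at line {lineno}: {entry}")
```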

Navigating AI Security Risks in Professional Settings


 

Generative artificial intelligence is one of the most revolutionary branches of AI, capable of producing entirely new content across many types of media, including text, images, audio, music, and even video. Unlike conventional machine learning models, which are built to execute specific tasks, generative AI systems learn patterns and structures from large datasets and can produce outputs that are not just original but often strikingly realistic. 

This ability to simulate human-like creativity has made generative AI a driving force in technological innovation, with applications that go well beyond simple automation and touch almost every sector of the modern economy. Generative AI tools are reshaping content creation workflows, producing compelling graphics and copy at scale. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Generative AI is also adaptive and open-ended, allowing organisations to explore new ideas and achieve efficiencies that traditional systems cannot offer. As adoption accelerates, enterprises increasingly need to understand both the capabilities and the risks of this powerful technology. 

Understanding these capabilities has become an essential part of staying competitive in a digital world that is rapidly changing. In addition to reproducing human voices and creating harmful software, generative artificial intelligence is rapidly lowering the barriers for launching highly sophisticated cyberattacks that can target humans. There is a significant threat from the proliferation of deepfakes, which are realistic synthetic media that can be used to impersonate individuals in real time in convincing ways. 

In a recent incident in Italy, cybercriminals used advanced audio deepfake technology to impersonate Defence Minister Guido Crosetto, demonstrating how convincingly such tools can manipulate and deceive. In another case, a finance professional transferred $25 million after being duped by fraudsters who used a deepfake simulation of the company's chief financial officer. 

Additionally, the increase in phishing and social engineering campaigns is concerning. As a result of the development of generative AI, adversaries have been able to craft highly personalised and context-aware messages that have significantly enhanced the quality and scale of these attacks. It has now become possible for hackers to create phishing emails that are practically indistinguishable from legitimate correspondence through the analysis of publicly available data and the replication of authentic communication styles. 

Automation lets cybercriminals weaponise these messages at scale, generating and distributing huge volumes of lures dynamically tailored to each target's profile and behaviour. Large language models (LLMs) have also transformed the development of malicious code. 

An LLM can help attackers design ransomware, refine exploit techniques, and circumvent conventional security measures. Organisations across multiple industries have reported a rise in AI-assisted ransomware incidents, with over 58% describing the increase as significant.

This trend means security strategies must adapt to threats that evolve at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. Employee awareness remains an essential defence, yet studies indicate that only 24% of organisations have implemented continuous cyber-awareness programmes. 

Companies should update their training initiatives with practical advice on spotting hyper-personalised phishing attempts, recognising subtle signs of deepfake audio, and identifying abnormal system behaviours that can slip past automated scanners. Complementing human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

DuckDuckGoose Suite, for example, focuses on detecting deepfakes and other synthetic media, while Tessian applies behavioural analytics and threat intelligence to intercept AI-driven phishing before it reaches employees. As well as disrupting malicious activity in real time, these technologies provide adaptive coaching that helps employees build stronger, more instinctive security habits. 
Organisations that combine informed human oversight with intelligent defensive tools will be best placed to build resilience against the expanding arsenal of AI-enabled cyber threats. Recent legal actions have underscored the complexity of balancing AI use with privacy requirements. OpenAI, for example, warned that a court order requiring ChatGPT to retain all user interactions, including deleted chats, could force it to keep data that should have been erased and thereby breach its own privacy commitments.

This dilemma highlights the challenges AI companies face in delivering enterprise services. OpenAI and Anthropic offer APIs and enterprise products that often include privacy safeguards; individuals using personal accounts, however, take on significant risk when handling sensitive information about themselves or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of these tools, and proprietary or confidential material should not be uploaded unless specifically authorised. Another critical concern is AI hallucination: because large language models are built to predict language patterns rather than verify facts, they can produce persuasively presented but entirely fictitious content.

This has already led to several high-profile incidents, including fabricated legal citations in court filings and invented bibliographies. Human review must therefore remain part of any professional workflow that incorporates AI-generated output. Bias is another persistent vulnerability.

Because AI models are trained on vast but imperfect datasets, they can mirror and even amplify the prejudices that exist within society. System prompts intended to prevent offensive outputs can introduce new biases of their own, and prompt adjustments have produced unpredictable and problematic responses, complicating efforts to maintain neutrality. 

Several cybersecurity threats, including prompt injection and data poisoning, are also on the rise. A malicious actor may use hidden commands or false data to manipulate model behaviour, causing outputs that are inaccurate, offensive, or harmful. User error remains an important factor as well: unintentionally sharing private AI chats or recording confidential conversations shows how easily confidentiality can be breached through simple mistakes.

It has also been widely reported that intellectual property concerns complicate the landscape. Many of the generative tools have been trained on copyrighted material, which has raised legal questions regarding how to use such outputs. Before deploying AI-generated content commercially, companies should seek legal advice. 

Perhaps the most challenging risk is the unknown: as AI systems develop, even their creators cannot always predict how they will behave, leaving organisations in a landscape where threats emerge in unexpected ways. Governments, meanwhile, face increasing pressure to establish clear rules and safeguards as artificial intelligence moves rapidly from the laboratory into virtually every corner of the economy. 

Before the 2025 change in administration, there was a growing momentum behind early regulatory efforts in the United States. For instance, Executive Order 14110 outlined the appointment of chief AI officers by federal agencies and the development of uniform guidelines for assessing and managing AI risks. As a result of this initiative, a baseline of accountability for AI usage in the public sector was established. 

The administration's approach changed when it rescinded that order, signalling a departure from proactive federal oversight. The outlook for AI regulation in the United States is now highly uncertain: the Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

If the measure becomes law, it could effectively halt local and regional governance at a time when AI is gaining influence across practically all industries. The European Union, meanwhile, is pursuing a more consistent approach to AI. 

As of March 2024, a comprehensive framework titled the Artificial Intelligence Act was established. This framework categorises artificial intelligence applications according to the level of risk they pose and imposes strict requirements for applications that pose a significant risk, such as those in the healthcare field, education, and law enforcement. 

The legislation also outright bans certain practices, such as the use of facial recognition systems in public places, reflecting a commitment to protecting individual rights. These differing regulatory strategies are widening the gap between regions in how AI oversight is defined and enforced. 

As the technology continues to evolve, organisations will have to remain vigilant and adapt to the changing legal landscape in order to ensure compliance and manage emerging risks effectively.

Think Twice Before Using Text Messages for Security Codes — Here’s a Safer Way

 



In today’s digital world, many of us protect our online accounts using two-step verification. This process, known as multi-factor authentication (MFA), usually requires a password and an extra code, often sent via SMS, to log in. It adds an extra layer of protection, but there’s a growing concern: receiving these codes through text messages might not be as secure as we think.


Why Text Messages Aren’t the Safest Option

When you get a code on your phone, you might assume it’s sent directly by the company you’re logging into—whether it’s your bank, email, or social media. In reality, these codes are often delivered by external service providers hired by big tech firms. Some of these third-party firms have been connected to surveillance operations and data breaches, raising serious concerns about privacy and security.

Worse, these companies operate with little public transparency. Several investigative reports have highlighted how this lack of oversight puts user information at risk. Additionally, government agencies such as the U.S. Cybersecurity and Infrastructure Security Agency (CISA) have warned people not to rely on SMS for authentication. Text messages are not encrypted, which means hackers who gain access to a telecom network can intercept them easily.


What Should You Do Instead?

Don’t ditch multi-factor authentication altogether. It’s still a critical defense against account hijacking. But you should consider switching to a more secure method—such as using an authenticator app.


How Authenticator Apps Work

Authenticator apps are programs installed on your smartphone or computer. They generate temporary codes for your accounts that refresh every 30 seconds. Because these codes live inside your device and aren’t sent over the internet or phone networks, they’re far more difficult for criminals to intercept.
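
Those 30-second codes are typically time-based one-time passwords (TOTP, standardised in RFC 6238): at enrolment the app and the service share a secret, and both derive the same short code from the current time, so nothing has to travel over SMS. A minimal sketch of the derivation, using a placeholder secret:

```python
# Minimal TOTP (RFC 6238) sketch: derive a 6-digit code from a shared secret
# and the current 30-second time step, the same scheme authenticator apps use.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_secret: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // step            # number of 30-second steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret of the kind provisioned via a QR code at enrolment.
print(totp("JBSWY3DPEHPK3PXP"))
```

The service runs the same computation on its copy of the secret and simply compares the two codes, which is why the app works even when your phone is offline.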

Apps like Google Authenticator, Microsoft Authenticator, LastPass, and even Apple’s built-in password tools provide this functionality. Most major platforms now allow you to connect an authenticator app instead of relying on SMS.


Want Even Better Protection? Try Passkeys

If you want the most secure login method available today, look into passkeys. These are a newer, password-free login option developed by a group of leading tech companies. Instead of typing in a password or code, you unlock your account using your face, fingerprint, or device PIN.

Here’s how it works: your device stores a private key, while the website keeps the matching public key. When you sign in, the site sends a challenge that your device signs with the private key, but only after you prove your identity with a biometric scan or device PIN; the site then checks the signature against your public key. Because no passwords or reusable codes are involved, there’s nothing for hackers to steal or intercept.
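
A simplified sketch of that challenge-response idea is shown below, using an Ed25519 key pair from Python's cryptography package. Real passkeys follow the WebAuthn/FIDO2 protocol, so this only illustrates the underlying key-pair principle:

```python
# Simplified illustration of the key-pair principle behind passkeys, using an
# Ed25519 key pair from the `cryptography` package. Real passkeys use the
# WebAuthn/FIDO2 protocol; this only demonstrates challenge-response signing.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: the device keeps the private key, the website stores the public key.
device_private_key = Ed25519PrivateKey.generate()
site_public_key = device_private_key.public_key()

# Login: the site issues a one-time challenge; the device signs it only after
# the user passes a local biometric or PIN check.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

# The site verifies the signature; no password or reusable code ever travels.
try:
    site_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```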

Passkeys are also backed up to your cloud account, so if you lose your device, you can still regain access securely.


Multi-factor authentication is essential—but how you receive your codes matters. Avoid text messages when possible. Opt for an authenticator app, or better yet, move to passkeys where available. Taking this step could be the difference between keeping your data safe or leaving it vulnerable.

Nucor Restores Operations After May Cyberattack, Expects Strong Q2 Earnings

 

Nucor, the largest steel producer in the United States, announced it has resumed normal operations after a cyberattack in May that exposed a limited amount of data.

According to a filing with the Securities and Exchange Commission, the company believes it has successfully removed the hackers from its systems and does not anticipate any material impact on its financial results or operations.

“The incident temporarily limited our ability to access certain functions and some facilities,” Nucor stated. To investigate and recover from the breach, the company engaged external forensic specialists. 

As part of its response, Nucor temporarily shut down its systems and restored portions of its data using backup files. The company has since collaborated with outside experts to strengthen its IT infrastructure against future intrusions.

Headquartered in Charlotte, North Carolina, Nucor produces approximately 25% of the nation’s raw steel. Last week, the company said it expects second-quarter earnings per share to range between $2.55 and $2.65 for the fiscal period ending July 5. Earnings are projected to grow across all three operating segments, with the most significant gains anticipated in its steel mills business, driven by higher average selling prices for sheet and plate products.

Nucor has not shared specific details about the financial consequences of the cyberattack. The company plans to release its earnings report on July 28, followed by a conference call on July 29.

FIR Filed After Noida Logistics Company Claims User Data Leaked

 

High-profile clients' private information, including that of top government officials, was leaked in a significant cybersecurity incident at Agarwal Packers and Movers Ltd (APML) in India. The June 1 incident has raised concerns about the security of corporate data as well as possible national security implications. A police investigation is in progress following a formal complaint. 

In what could be one of the most sensitive data breaches in recent memory, Agarwal Packers and Movers Ltd (APML), a well-known logistics company with its headquarters located in Sector 60, Noida, has disclosed that private client information, including the addresses and phone numbers of senior government clients, has been stolen. 

The intrusion was detected on June 1 after several clients, including prominent bureaucrats, diplomats, and military personnel, began receiving suspicious, highly targeted phone calls.

"The nature of the calls strongly indicated that the callers had access to specific customer queries and records related to upcoming relocations," the complainant, Jaswinder Singh Ahluwalia, Group President and CEO of APML, stated in the police FIR. He cautioned that this is more than just a disclosure of company data. It has an impact on personal privacy, public trust, and possibly national security. 

The company initiated an internal technical inspection, which uncovered traces of unauthorised cyber infiltration, confirming worries regarding a breach. The audit detected collaboration between internal personnel and external cybercriminals. While the scope of the hack is still being investigated, its significance is undeniable: the firm serves India's elite, making the stolen data a potential goldmine for bad actors. 

In accordance with Sections 318(4) and 319(2) of the Bharatiya Nyaya Sanhita and Sections 66C (identity theft) and 66D (impersonation by computer resource) of the Information Technology Act, a formal complaint was filed at the Sector 36 Cyber Crime Police Station. 

According to Cyber SHO Ranjeet Singh, police have a detailed complaint backed by technical evidence, and the cyber unit is currently examining access trails, firewall activity, and internal server records. Given the nature of the clients affected, the matter is being handled with the utmost priority. 

The attack has triggered calls for stricter cybersecurity practices in private companies that serve sensitive sectors. While APML has yet to reveal how many people were affected, its internal records allegedly include relocation information for high-level clientele such as judges, intelligence officers, and foreign dignitaries.

Palo Alto Detects New Prometei Botnet Attacks Targeting Linux Servers

Cybersecurity analysts from Palo Alto Networks’ Unit 42 have reported a resurgence of the Prometei botnet, now actively targeting Linux systems with new, upgraded variants as of March 2025. Originally discovered in 2020 when it was aimed at Windows machines, Prometei has since expanded its reach. 

Its Linux-based malware strain has been in circulation since late 2020, but recent versions—designated as 3.x and 4.x—demonstrate significant upgrades in their attack capabilities. The latest Prometei malware samples are equipped with remote control functionality, domain generation algorithms (DGA) to ensure connection with attacker-controlled servers, and self-updating systems that help them remain undetected. This renewed activity highlights the botnet’s growing sophistication and persistent threat across global networks. 

At its core, Prometei is designed to secretly mine Monero cryptocurrency, draining the resources of infected devices. However, it also engages in credential harvesting and can download additional malicious software depending on the attacker’s goals. Its modular framework allows individual components to carry out specific tasks, including brute-force attacks, vulnerability exploitation (such as EternalBlue and SMB bugs), mining operations, and data exfiltration. 

The malware is typically delivered via HTTP GET requests from rogue URLs like hxxp://103.41.204[.]104/k.php. Prometei uses 64-bit Linux ELF binaries that extract and execute payloads directly in memory. These binaries also carry embedded configuration data in a JSON format, containing fields such as encryption keys and tracking identifiers, making them harder to analyze and block. 
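
Because the configuration travels as plain JSON inside the binary, analysts can often recover it by scanning the file for balanced JSON objects. The rough sketch below illustrates that idea; the sample path is a placeholder and the parser is deliberately naive, not a description of Prometei's actual schema:

```python
# Rough analyst's sketch: scan a binary for embedded top-level JSON objects,
# the way Prometei samples are reported to carry their configuration. The
# sample path is a placeholder and the scanner is deliberately naive.
import json

def extract_json_blobs(path: str) -> list[dict]:
    data = open(path, "rb").read()
    blobs = []
    i = 0
    while (start := data.find(b"{", i)) != -1:
        depth = 0
        for end in range(start, len(data)):
            byte = data[end:end + 1]
            if byte == b"{":
                depth += 1
            elif byte == b"}":
                depth -= 1
                if depth == 0:
                    try:
                        blobs.append(json.loads(data[start:end + 1]))
                    except ValueError:
                        pass  # not valid JSON, keep scanning
                    break
        i = start + 1
    return blobs

if __name__ == "__main__":
    for config in extract_json_blobs("prometei_sample.bin"):
        print(config)
```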

Once a system is compromised, the malware collects extensive hardware and software information—CPU details, OS version, system uptime—and sends this back to its command-and-control (C2) servers, including addresses like hxxp://152.36.128[.]18/cgi-bin/p.cgi. Thanks to DGA and self-update features, Prometei ensures consistent communication with attacker infrastructure and adapts to security responses on the fly.  

To defend against these threats, Palo Alto Networks advises using advanced detection tools such as Cortex XDR, WildFire, and their Advanced Threat Prevention platform. These technologies utilize real-time analytics and machine learning to identify and contain threats. Organizations facing a breach can also contact Palo Alto’s Unit 42 incident response team for expert help. 

The activity observed from March to April 2025 underlines the continued evolution of the Prometei botnet and the growing risk it poses to businesses relying on Linux environments. Strengthening cybersecurity protocols and remaining alert to new threats is essential in today’s threat landscape.

Russian Threat Actors Circumvent Gmail Security with App Password Theft


 

Security researchers in Google's Threat Intelligence Group (GTIG) have uncovered a highly sophisticated cyber-espionage campaign orchestrated by Russian threat actors who succeeded in circumventing Google's multi-factor authentication (MFA) protections for Gmail accounts. 

The researchers found that the attackers used highly targeted and convincing social engineering, impersonating U.S. Department of State officials to establish trust with their victims. Once a rapport had been built, the perpetrators manipulated their victims into creating app-specific passwords. 

These passwords are unique 16-character codes generated by Google that allow certain applications and devices to access an account when two-factor authentication is enabled. Because app passwords bypass conventional two-factor prompts, the attackers were able to gain persistent, undetected access to sensitive emails in victims' Gmail accounts. 

The operation shows how inventive state-sponsored cyber actors have become, and how much risk persists in seemingly secure mechanisms for recovering and accessing accounts. According to Google, the activity was carried out by a threat cluster designated UNC6293, which is believed to be closely linked to the state-sponsored Russian hacking group APT29. 

APT29 is regarded as one of the most sophisticated Advanced Persistent Threat (APT) groups sponsored by the Russian government, and intelligence analysts consider it an extension of the Russian Foreign Intelligence Service (SVR). Over the past decade, this clandestine collective has orchestrated high-profile cyber-espionage campaigns against strategic targets including the U.S. government, NATO member organisations, and prominent research institutes around the world. 

APT29's operators have a reputation for prolonged infiltration operations that can remain undetected for extended periods, characterised by a focus on stealth and persistence. Their tradecraft consistently relies on refined social engineering techniques that let them blend into legitimate communications and exploit the trust of their targets. 

By crafting highly convincing narratives and gradually manipulating individuals into compromising security controls, APT29 has demonstrated an ability to bypass even highly sophisticated technical defences. This combination of patience, technical expertise, and psychological manipulation has earned the group a reputation as one of the most formidable cyber-espionage threats associated with Russian state interests. 

The prolific group is known in the cybersecurity community by a multitude of names, including BlueBravo, Cloaked Ursa, Cozy Bear, CozyLarch, ICECAP, Midnight Blizzard, and The Dukes. In contrast to conventional phishing campaigns, which rely on urgency or intimidation to elicit a quick response, this campaign unfolded methodically over several weeks. 

The attackers took a deliberate approach, slowly building a sense of trust and familiarity with their intended targets. To make the deception more convincing, they distributed phishing emails crafted to look like official meeting invitations, often CC'ing at least four fabricated email addresses on the “@state.gov” domain. 

This tactic was designed to lend the communication legitimacy and reduce the likelihood that recipients would scrutinise it, increasing the chances of the lure succeeding. One confirmed victim of the campaign is the British writer Keir Giles, a senior consulting fellow at Chatham House, the renowned global affairs think tank. 

Reports indicate that Giles was drawn into a lengthy email correspondence with a person claiming to be Claudia S. Weber of the U.S. Department of State. More than ten carefully crafted messages were sent over several weeks, deliberately timed to coincide with Washington's standard business hours, and the attacker gradually built credibility and trust. 

Notably, the emails were sent from addresses configured so that no delivery errors would occur, further strengthening the ruse. Once trust was firmly established, the adversary escalated the scheme by sending a six-page PDF, styled as an official State Department document, with a cover letter on what appeared to be official letterhead. 

The document instructed the target to open Google's account settings, create a 16-character app-specific password labelled "ms.state.gov", and return the code by email under the guise of completing secure onboarding. With that app password, the threat actors gained sustained access to the victim's Gmail account, bypassing multi-factor authentication altogether. 

Citizen Lab experts who reviewed the emails and PDF at Giles' request noted that they were free of the subtle language inconsistencies and grammatical errors often associated with fraudulent communications. The precision of the language led researchers to suspect that advanced generative AI tools were used to craft polished, credible content designed to evade scrutiny and enhance the deception. 

The campaign followed a well-planned, incremental strategy geared towards increasing the likelihood that targets would cooperate willingly. In one documented instance, the threat actor tried to entice a leading academic expert into a private online discussion under the pretext of joining a secure State Department forum.

To supposedly enable guest access to the platform, the victim was instructed to create an app-specific password in Google's account settings. The attacker then used this credential to take full control of the victim's Gmail account, effectively circumventing the multi-factor authentication measures in place. 

According to security researchers, the phishing outreach was carefully crafted to resemble a routine, legitimate onboarding process. The attackers exploited both the general lack of awareness of the dangers of app-specific passwords and the widespread trust placed in official communications from U.S. government institutions. 

A narrative of official protocol, woven together with professional-sounding language, made the perpetrators more credible and reduced the chance that targets would question the authenticity of the request. Cybersecurity experts advise that individuals at higher risk from this kind of campaign, such as journalists, policymakers, academics, and researchers, enrol in Google's Advanced Protection Program (APP). 

A major component of the programme is restricting account access to verified applications and devices, which offers enhanced safeguards. Experts also advise organisations to disable app-specific passwords wherever possible and to set up robust internal policies requiring that any unusual or sensitive request, especially one that appears to originate from a reputable institution or government entity, be independently verified. 

More intensive training for the personnel most exposed to prolonged social engineering, coupled with clear, secure channels for staff to verify suspicious communications, would help prevent similar breaches in the future. The incident is a reminder that even mature security ecosystems remain vulnerable to a determined adversary combining psychological manipulation with technical subterfuge. 

With threat actors continually refining their methods, organisations and individuals must recognise that robust cybersecurity is much more than merely a set of tools or policies. In order to combat cyberattacks as effectively as possible, it is essential to cultivate a culture of vigilance, scepticism, and continuous education. In particular, professionals who routinely take part in sensitive research, diplomatic relations, or public relations should assume they are high-value targets and adopt a proactive defence posture. 

Consequently, any unsolicited instructions should be verified through a separate, trusted channel, hardware security keys should be used to supplement authentication, and account settings should be reviewed regularly for unauthorised changes. Institutions, for their part, should invest in advanced threat intelligence, simulate sophisticated phishing scenarios, and ensure that security protocols are as accessible and clearly communicated as they are technically sound. 

Fundamentally, resilience against state-sponsored cyber-espionage depends on anticipating not only the tactics adversaries will deploy, but also the trust they will exploit to reach their goals.

North Korean Hackers Target Crypto Professionals With Info-Stealing Malware

 

North Korean hackers are tricking crypto experts into attending elaborate phoney job interviews in order to access their data and install sophisticated malware on their devices. 

Cisco Talos disclosed earlier this week that a new Python-based remote access trojan called "PylangGhost" has been linked to a North Korean hacking group dubbed "Famous Chollima," also known as "Wagemole." "Based on the advertised positions, it is clear that the Famous Chollima is broadly targeting individuals with previous experience in cryptocurrency and blockchain technologies," the researchers explained. 

The effort uses fake employment sites that mimic reputable businesses like Coinbase, Robinhood, and Uniswap to recruit blockchain and crypto experts in India. The scam begins with bogus recruiters guiding job seekers to skill-testing websites, where they submit personal information and answer technical questions. 

Following completion of the assessments, candidates are directed to allow camera access for a video interview, and then urged to copy and execute malicious commands masked as video driver installations. 

Dileep Kumar H V, director of Digital South Trust, told Decrypt that to combat these scams, "India must mandate cybersecurity audits for blockchain firms and monitor fake job portals.” “CERT-In should issue red alerts, while MEITY and NCIIPC must strengthen global coordination on cross-border cybercrime,” he stated, calling for “stronger legal provisions” under the IT Act and “digital awareness campaigns.” 

The recently identified PylangGhost malware has the ability to harvest session cookies and passwords from more than 80 browser extensions, including well-known crypto wallets and password managers like Metamask, 1Password, NordPass, and Phantom. The Trojan runs remote commands from command-and-control servers and gains continuous access to compromised systems. 

This most recent operation fits in with North Korea's larger trend of cybercrime with a crypto focus, which includes the infamous Lazarus Group, which has been involved in some of the biggest heists in the industry. The regime is now focussing on individual professionals to obtain intelligence and possibly infiltrate crypto organisations from within, in addition to stealing money straight from exchanges. 

With campaigns like "Contagious Interview" and "DeceptiveDevelopment," the gang has been launching hiring-based attacks since at least 2023. These attacks have targeted cryptocurrency developers on platforms like GitHub, Upwork, and CryptoJobsList.

BitoPro Blames North Korea’s Lazarus Group for $11 Million Crypto Theft During Hot Wallet Update

 

Taiwanese cryptocurrency exchange BitoPro has attributed a major cyberattack that resulted in the theft of approximately $11 million in digital assets to the infamous North Korean hacking group Lazarus. The breach occurred on May 8, 2025, when attackers exploited vulnerabilities during a hot wallet system upgrade.

According to BitoPro, its internal investigation uncovered evidence linking the incident to Lazarus, citing similarities in techniques and tactics observed in previous large-scale intrusions.

“The attack methodology bears resemblance to patterns observed in multiple past international major incidents, including illicit transfers from global bank SWIFT systems and asset theft incidents from major international cryptocurrency exchanges,” reads the company’s announcement.

BitoPro, which serves primarily Taiwanese customers and offers fiat currency transactions in TWD alongside various crypto assets, has over 800,000 registered users and processes nearly $30 million in trading volume each day.

During the attack, unauthorized withdrawals were conducted from an older hot wallet across multiple blockchains, including Ethereum, Tron, Solana, and Polygon. The stolen funds were subsequently funneled through decentralized exchanges and mixing services such as Tornado Cash, ThorChain, and Wasabi Wallet to obscure their origin.

Although the breach took place in early May, BitoPro publicly acknowledged the incident only on June 2, assuring users that platform operations remained unaffected and that impacted wallets were replenished using reserves.

The subsequent investigation concluded there was no evidence of insider involvement. Instead, attackers had carried out a sophisticated social engineering campaign that compromised an employee’s device responsible for managing cloud operations. Through this infection, they hijacked AWS session tokens, effectively bypassing multi-factor authentication protections to gain access to BitoPro’s cloud infrastructure.

The hackers’ command-and-control server then issued instructions to implant malicious scripts into the hot wallet host in preparation for the heist. By carefully simulating legitimate activity, they were able to transfer assets undetected when the wallet upgrade took place.

Once BitoPro became aware of the unauthorized activity, it deactivated the hot wallet system and rotated cryptographic keys, though by that point, roughly $11 million had already been drained.
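
Hijacked session tokens typically leave a trail of otherwise valid API calls from unfamiliar networks, which cloud audit logs can surface after the fact. The sketch below shows one way such a retrospective sweep might look with boto3 and AWS CloudTrail; the trusted IP prefixes and time window are assumptions, and it is not drawn from BitoPro's investigation:

```python
# Sketch of a retrospective AWS CloudTrail sweep for management events issued
# from outside a known network range, one way hijacked session tokens can show
# up. The trusted prefixes and time window are illustrative assumptions only.
import json
from datetime import datetime, timedelta, timezone

import boto3

TRUSTED_PREFIXES = ("203.0.113.",)  # placeholder office/VPN ranges

def events_from_unknown_networks(hours: int = 24) -> list[dict]:
    client = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    findings = []
    paginator = client.get_paginator("lookup_events")
    for page in paginator.paginate(StartTime=start, EndTime=end):
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            source_ip = detail.get("sourceIPAddress", "")
            if source_ip and not source_ip.startswith(TRUSTED_PREFIXES):
                findings.append({
                    "time": str(event["EventTime"]),
                    "event": event.get("EventName", ""),
                    "source_ip": source_ip,
                })
    return findings

if __name__ == "__main__":
    for finding in events_from_unknown_networks():
        print(finding)
```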

The exchange has notified relevant authorities and collaborated with external cybersecurity specialists to conduct a thorough review, which concluded on June 11.

The Lazarus Group has developed a notorious reputation for targeting cryptocurrency platforms and decentralized finance ecosystems, with previous operations including a record-setting $1.5 billion theft from Bybit.

U.S. Senators Propose New Task Force to Tackle AI-Based Financial Scams

 


In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.

The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.

The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.


This report is expected to outline:

• How financial institutions can better use AI to stop fraud before it happens,

• Ways to protect consumers from being misled by deepfake content, and

• Policy and regulatory recommendations for addressing this evolving threat.


One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.

According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.

While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.

Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.

As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.

Lazarus Group Suspected in $11M Crypto Heist Targeting Taiwan’s BitoPro Exchange

 

Taiwanese cryptocurrency platform BitoPro has blamed North Korea’s Lazarus Group for a cyberattack that resulted in $11 million in stolen digital assets. The breach occurred on May 8, 2025, during an upgrade to the exchange’s hot wallet system. 

According to BitoPro, the tactics and methods used by the hackers closely resemble those seen in other global incidents tied to the Lazarus Group, including high-profile thefts via SWIFT banking systems and other major crypto platforms. BitoPro serves a primarily Taiwanese customer base, offering fiat transactions in TWD alongside various cryptocurrencies. 

The exchange currently supports over 800,000 users and processes approximately $30 million in daily trades. The attack exploited weaknesses during a system update, enabling unauthorized withdrawals from a legacy hot wallet across several blockchain networks, including Ethereum, Tron, Solana, and Polygon. The stolen cryptocurrency was then quickly laundered through decentralized exchanges and mixers such as Tornado Cash, Wasabi Wallet, and ThorChain, making recovery and tracing more difficult. 

Despite the attack taking place in early May, BitoPro only publicly acknowledged the breach on June 2. At that time, the exchange assured users that daily operations remained unaffected and that the compromised hot wallet had been replenished from its reserve funds. Following a thorough investigation, the exchange confirmed that no internal staff were involved. 

However, the attackers used social engineering tactics to infect a cloud administrator’s device with malware. This allowed them to steal AWS session tokens, bypass multi-factor authentication, and gain unauthorized access to BitoPro’s cloud infrastructure. From there, they were able to insert scripts directly into the hot wallet system and carry out the theft while mimicking legitimate activity to avoid early detection. 

After discovering the breach, BitoPro deactivated the affected wallet system and rotated its cryptographic keys, though the damage had already been done. The company reported the incident to authorities and brought in a third-party cybersecurity firm to conduct an independent review, which concluded on June 11. 

The Lazarus Group has a long history of targeting cryptocurrency and decentralized finance platforms. This attack on BitoPro adds to their growing list of cyber heists, including the recent $1.5 billion digital asset theft from the Bybit exchange.

Malicious Copycat Repositories Emerge in Large Numbers on GitHub

 


The researchers at the National Cyber Security Agency have identified a sophisticated campaign that involved malicious actors uploading more than 67 deceptive repositories to GitHub, masquerading as legitimate Python-based security and hacking tools. 

In reality, these repositories serve as a vehicle for injecting trojanized payloads, compromising unsuspecting developers and security professionals. ReversingLabs, which tracks the activity under the codename Banana Squad, reports that the operation appears to be an extension of an earlier attack wave uncovered in 2023. 

During the previous campaign, counterfeit Python packages distributed through the Python Package Index (PyPI) were downloaded more than 75,000 times and carried information-stealing capabilities aimed primarily at Windows environments. By pivoting to GitHub, the attackers are exploiting the platform's reputation as a trusted source of open-source software, making their malicious code more likely to be adopted and expanding its reach. 

This evolving threat underscores the persistent pressure on the software supply chain and the importance of authenticating packages and repositories before integrating them into development workflows. In its most recent operation, Banana Squad deployed nearly 70 malicious repositories, each carefully crafted to resemble a genuine Python-based hacking utility. 

The counterfeit repositories were designed so that their names and file structures closely mirrored those of reputable open-source projects already hosted on GitHub, making them appear trustworthy at first glance. To conceal their intent further, the group exploited a relatively overlooked quirk of GitHub's code display interface. 

GitHub does not wrap long code lines onto the next line; when a line exceeds the width of the viewing window, it simply extends off the right edge of the screen and can only be seen by scrolling horizontally. The attackers tapped into this quirk by appending long runs of whitespace to seemingly benign lines, pushing the malicious payload far beyond the visible area of the code. 

Even a diligent code review may miss the hidden threat unless the reviewer scrolls horizontally to the very end of each line, creating exactly the blind spot the attackers relied on. The technique shows how creatively threat actors now work to disguise repositories and propagate malware under the guise of legitimate tools while evading detection. 
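To make the trick concrete, the short Python sketch below was written for this article and is not code from an actual Banana Squad repository. It shows how a line can look harmless in GitHub's viewport while carrying an extra statement past the right edge of the screen; in a real attack the padding runs to hundreds of characters and the hidden statement decodes and executes a payload rather than printing a message.

```python
# Illustrative sketch of the whitespace-padding trick described above.
# Not taken from any real malicious repository.
def check_update():
    # The viewport shows only the innocuous assignment; the statement after
    # the long run of spaces sits past the right edge of the rendered file
    # and is easy to miss without horizontal scrolling.
    version = "1.0.3"                                                                                    ;print("hidden statement executed")
    return version

check_update()
```

Because GitHub renders the file without wrapping, everything after the padding stays out of sight unless the reviewer deliberately scrolls right or inspects the raw file.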

The Banana Squad activity is not an isolated incident; it illustrates a broader trend in which cybercriminal groups increasingly use GitHub to distribute malicious code. Over the past several months it has become clear that threat actors treat the platform as a convenient delivery channel for reaching a wide range of unsuspecting developers and hobbyists. 

Researchers at Trend Micro, for example, recently attributed 76 malicious projects to the Water Curse group. These repositories were carefully engineered to deliver staged payloads that harvest passwords, browser cookies, and other session data, and to plant stealthy tools designed to maintain persistent access to compromised machines. 

A separate investigation by Check Point shed light on the Stargazers Ghost Network, a complex fraud scheme built on large numbers of fraudulent GitHub accounts. These ghost profiles used stars, forks, and frequent updates to mimic the activity of legitimate developers, making the repositories appear genuine to potential victims. The ruse allowed the attackers to inflate the popularity of their repositories and promote Java-based malware aimed at Minecraft players.

Doing so pushed the repositories up GitHub's search rankings and lent them credibility with potential users. Research by Check Point and Checkmarx suggests the Stargazers Ghost Network is only a small part of a larger underground ecosystem built around distribution-as-a-service models, in which delivery infrastructure is rented out much as cloud providers rent out computing capacity. 

Sophos analysts reached a similar conclusion in their own research, identifying 133 compromised GitHub repositories that have been active since mid-2022. The malicious projects concealed harmful code in various forms, including Visual Studio build scripts, manipulated Python files, and JavaScript snippets used to manipulate screensavers. Once executed, the implants can gather system information, capture screenshots, and launch well-known remote access trojans such as Lumma Stealer, Remcos, and AsyncRAT.

Sophos also reported that the operators spread links to their repositories through Discord channels and YouTube tutorials, typically advertising quick game hacks or easy-to-use cyberattack tools. The approach has proven highly effective at attracting novice users, who compile and run the malware on their own machines and become victims of the very schemes they hoped to exploit.

As the world's largest hosting and collaboration platform for open-source software, GitHub is a natural target for cybercriminals seeking to infiltrate development environments. Compared with package registries such as npm or PyPI, however, repositories have historically been a less attractive vehicle for mass compromise, because adopting code from a repository is inherently more manual and requires several deliberate steps. 

To integrate a repository into a project, a developer must find it, evaluate its credibility, clone it locally, and often perform at least a cursory code review along the way. Each of those steps is an additional obstacle for attackers hoping to distribute malware at scale through source repositories. 

Despite this, the recent shift by groups such as Banana Squad from traditional package registries to GitHub repositories may signal a threat landscape reshaped by stronger defensive measures within those registries. Over the last two years, most major open-source ecosystems have made substantial security improvements to stop malicious packages from spreading. 

Notably, the Python Package Index (PyPI) recently made two-factor authentication (2FA) mandatory for all users. ReversingLabs researchers say these measures are already producing measurable results and are raising the bar for attackers seeking to hijack or impersonate trusted maintainers. 

According to Simmons, one of the firm's principal analysts, the open-source community has become progressively more vigilant about scrutinising suspicious packages and reporting them. Adversaries are aware of this, and they are finding it increasingly difficult to sustain malicious campaigns without being rapidly detected and removed. 

Simmons contends that the combination of stricter platform policies and a more security-conscious user base has produced a dramatic reduction in successful attacks. Empirical evidence supports this: according to ReversingLabs, the number of malicious packages identified across npm, PyPI, and RubyGems declined by more than 70% between 2023 and 2024. 

That decline underscores the progress defensive initiatives have made within the package registries, but it also highlights the adaptability of threat actors, who may now be shifting their focus to repositories, where security controls and community vigilance are not yet as robust. 

Developers should therefore exercise the same level of scrutiny when adopting code from repositories as they do when installing packages, since attackers will use any available channel to spread their payloads. The surge in malicious activity against GitHub underscores a broader point: as defenders strengthen security controls in one part of the software ecosystem, adversaries will invariably pivot to the next weak spot. 
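As one small, practical precaution, and purely as an illustrative sketch rather than a complete or recommended defence, a reviewer could scan a freshly cloned repository for the kind of heavily padded lines Banana Squad used to push code off-screen. The padding threshold and file extensions below are arbitrary assumptions chosen for this example, not values from any published detection rule.

```python
# Illustrative sketch: flag source lines that hide code behind long whitespace
# runs, the trick described earlier. Threshold and extensions are arbitrary.
import pathlib
import re

SUSPICIOUS_PADDING = re.compile(r"\S[ \t]{40,}\S")  # text, 40+ spaces/tabs, more text
EXTENSIONS = {".py", ".js", ".ps1", ".sh"}

def scan_repository(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in EXTENSIONS or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS_PADDING.search(line):
                print(f"{path}:{lineno}: possible code hidden after long whitespace run")

scan_repository(".")  # run from the top of the cloned repository
```

A match is not proof of malice, but it is a cheap signal to scroll to the end of the line and look.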

Succeeding in this dynamic requires a renewed commitment across the open-source community to treating security as a shared responsibility rather than an afterthought. Developers should adopt a defence-in-depth approach that combines technical safeguards, such as cryptographic signatures, automated dependency scans, and sandboxed testing environments, with organisational practices that emphasise verifying sources and weighing community trust signals. 

Platform providers, for their part, must continue to invest in proactive threat hunting, better detection of automated and manipulated accounts, and clearer mechanisms that let users evaluate the reputation, provenance, and integrity of repositories. 

Education remains vital, too: both novice contributors and experienced maintainers need the skills to recognise subtle indications of tampering and deception. The open-source ecosystem is clearly evolving.

Only a collaborative and adaptive approach, rooted in transparency, accountability, and constant vigilance, will be able to effectively blunt the effects of campaigns such as Banana Squad, thereby safeguarding the enormous value open-source innovation offers to individuals and organisations throughout the world.