
Google Strengthens Ad Safety by Blocking 8.3 Billion Ads and Unveils Android 17 Privacy Changes


 

Google revealed in its latest transparency report that it has stepped up efforts to secure the Android ecosystem, blocking more than 1.75 million policy-violating apps from reaching the Play Store by the end of 2025.

The company has also taken decisive action against repeat offenders, banning more than 80,000 developer accounts identified as distributing harmful or deceptive applications. In addition, Google prevented over 255,000 apps from obtaining excessive or unnecessary access to sensitive user data, a measure growing in importance as global privacy standards tighten.

Beyond outright removals, Google has intervened earlier in the app lifecycle as well. It attributes these outcomes to a combination of stricter verification processes, expanded mandatory review procedures, and more rigorous pre-release testing requirements.

Parts of the developer community have pushed back against these measures. Alongside the platform-level controls, Google released 35 policy updates over the course of the year, broadening its enforcement focus across the digital advertising landscape; violations tied to copyright abuse, financial fraud, and scam-driven campaigns have all become more prevalent in recent years.

Google's latest Ads Safety Report shows a parallel expansion of enforcement beyond app distribution, underscoring the magnitude and complexity of abuse across its advertising infrastructure. More than 8.3 billion ads were blocked or removed over the course of 2025, a further 4.8 billion ads were restricted, and approximately 24.9 million advertiser accounts were suspended for policy violations.

The effectiveness of these controls shows in the fact that the majority of non-compliant ads were intercepted and removed before they could be delivered to users, indicating increasingly proactive detection and enforcement. A closer look at violations reveals that abuse of the advertising network was the largest single category, accounting for 1.29 billion blocked or removed ads.

Substantial numbers of violations also involved personalisation, legal compliance failures, and misrepresentation, while several high-risk segments, including financial services, sexually explicit content, and copyright infringement, continued to demand significant regulatory attention.

Taken together, these figures point to a maturing enforcement model capable not only of reacting to abuse but of systematically anticipating misuse patterns across both advertiser behavior and content distribution channels. Alongside its enforcement-driven approach, Google is also reshaping Android's underlying permission architecture to address long-standing privacy concerns: Android 17 arrives with new policy updates that refine how applications handle highly sensitive information such as contacts and location.

As part of this change, the standardized Contact Picker gives users a secure, searchable interface for granting access only to explicitly selected contacts, rather than exposing their entire contact list. This marks a significant departure from earlier practice, in which the broad READ_CONTACTS permission gave applications unrestricted access to all stored contact data.

By aligning access controls with the principle of data minimization, developers are now required to declare precise data requirements, down to individual fields such as phone numbers or email addresses. Compliance measures also mandate that the default access pathway be the Contact Picker or Android Sharesheet, with full contact access permitted only in exceptional cases that must be formally justified through Play Console declarations.

Google has additionally developed a new mechanism for controlled location access, built around a streamlined permission prompt that supports one-time requests for precise location data. A visible, ongoing indicator accompanies this change, both limiting persistent tracking and reinforcing user awareness in real time whenever non-system applications access location information.

In response, developers must reevaluate how their applications collect data, ensuring that location requests are proportionate to functional requirements. The changes reflect a wider architectural shift toward contextual permissions, which are both purpose-bound and time-limited, reducing the risk of excessive or continuous data exposure and shrinking the attack surface. Beyond platform and advertising security, Google has also stepped up efforts against deceptive web behavior that undermines user trust and navigational integrity.

A new spam enforcement framework from the company classifies "back button hijacking", in which websites manipulate browser behavior by intercepting the back button and rerouting users to a different site, as a malicious practice. Evidence suggests the technique is increasingly common across ad-driven and low-trust domains; besides disrupting a fundamental browsing function, these forced pathways often surface unsolicited content, advertisements, or unrelated destinations.

In Google's view, this represents a critical mismatch between user intent and actual site behavior, undermining both user confidence and the search experience as a whole. Sites found engaging in such practices may face a range of enforcement actions, from algorithmic demotion to manual penalties, hurting their visibility in search results and, as a consequence, their organic traffic.

Publishers have been given a transition period before enforcement commences on June 15, 2026, during which they can audit and remedy scripts or design patterns that interfere with standard browser navigation or alter session history in opaque ways. The move makes clear that Google's ranking philosophy continues to shift toward user-aligned interactions, with manipulative redirects, forced navigation loops, and intrusive ad behaviors treated as systemic risks rather than isolated infractions.

Google is further strengthening its defensive posture by leveraging artificial intelligence to counter increasingly sophisticated forms of malvertising, with its Gemini model playing a pivotal role. By incorporating behavioral signals and contextual intent, the model can identify deceptive advertising patterns earlier, preemptively block malicious campaigns, and detect fraud at scale, going beyond traditional rule-based and keyword-based detection systems.

Operational outcomes reflect this shift toward anticipatory enforcement: nearly 99% of harmful advertisements were intercepted before reaching users. In addition to removing hundreds of millions of scam-linked ads and suspending millions of associated advertiser accounts, the company restricted billions more ads for policy non-compliance. These findings illustrate a broader industry challenge, in which threat actors use generative AI to create highly convincing fraud campaigns, making advanced AI systems an increasingly essential line of defense.

Google has also implemented structural safeguards to reduce fraud risk within its developer and business ecosystem. A secure app-ownership transfer mechanism within the Play Console addresses vulnerabilities around informal or unauthorized account transitions, including account takeovers, illicit marketplace activity, and credential misuse.

Organizations will be required to adopt this standardized transfer process starting in May 2026, increasing the traceability and operational accountability associated with changes in application ownership. The confluence of these developments suggests that enterprises operating within Google's ecosystem are recalibrating their cybersecurity priorities. 

The convergence of heightened privacy enforcement, an AI-driven and constantly evolving threat landscape, and stronger platform-level controls is redefining what security means. Organizations must align application design with stricter data governance requirements, backed by internal security controls, monitoring capabilities, and governance frameworks, to mitigate emerging risks across both user-facing and operational layers.

For organizations, the broader consequence of increasingly sophisticated enforcement mechanisms and ever more granular platform controls is the need for sustained adaptability. Security can no longer be a purely reactive function; it must be integrated into development lifecycles, data governance models, and digital operations from the very beginning.

Aligning with evolving platform policies, investing in threat intelligence, and maintaining continuous visibility across application and advertising channels will be imperative to minimizing exposure. As security challenges become increasingly automated and scaled, resilience will depend on the ability to anticipate, integrate, and respond to threats within a unified operational strategy rather than on isolated controls.

AI Tools Make Phishing Attacks Harder to Detect, Survey Warns


 

Amid an ever-evolving landscape of cyber threats, phishing remains the leading avenue for data breaches. In 2025, however, it has undergone a dangerous transformation.

What used to be a crude attempt at deception has evolved into a highly sophisticated operation backed by artificial intelligence. Where malicious actors once relied on poorly worded, grammatically broken messages, they now deploy generative AI systems such as GPT-4 and its successors to craft emails that are eerily authentic, contextually aware, and meticulously tailored to each target.

The U.S. Federal Bureau of Investigation has sounded the alarm: cybercriminals are increasingly using artificial intelligence to orchestrate highly targeted phishing campaigns, producing communications that resemble legitimate correspondence with near-perfect precision. According to FBI Special Agent Robert Tripp, these tactics can result in devastating financial losses, reputational damage, or the compromise of sensitive data.

By the end of 2024, the rise of AI-driven phishing was no longer a subtle trend but an undeniable reality. According to cybersecurity analysts, phishing activity has increased by 1,265 percent over the last three years, a direct result of the adoption of generative AI tools. Traditional email filters and security protocols, once effective against conventional scams, are increasingly outmanoeuvred by AI-enhanced deception.

AI-generated phishing has become the dominant email-borne threat of 2025, eclipsing even ransomware and insider risks in sophistication and scale. Organisations throughout the world face a fundamental change in how digital defence works, and complacency is not an option.

Artificial intelligence has fundamentally altered the anatomy of phishing, transforming it from a scattershot strategy into an alarmingly precise and comprehensive threat. According to experts, adversaries now use AI not merely to automate attacks but to amplify their scale, sophistication, and success rates.

As AI enables criminals to craft messages that mimic human tone, context, and intent, the line between legitimate communication and deception grows increasingly blurred. Cybersecurity analysts emphasise that to survive in this environment, security teams and decision-makers must maintain constant vigilance and build AI-awareness into workforce training and defensive strategies. One manifestation of the new threat is the rising frequency of polymorphic phishing attacks, whose AI-driven automation makes phishing emails ever harder for users to detect.

By automating the creation of phishing emails, attackers can generate thousands of variants, each with slight changes to the subject line, sender details, or message structure. According to recent research, 76 per cent of phishing attacks in 2024 had at least one polymorphic trait; more than half originated from compromised accounts, and about a quarter relied on fraudulent domains.
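That "slight changes" property is exactly why defenders increasingly cluster near-duplicate messages rather than match exact strings. The following is a minimal illustrative sketch in Python (standard library only, not any vendor's actual detection logic) of grouping polymorphic variants of a lure by fuzzy similarity; the threshold and sample subjects are arbitrary choices for the example:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] between two case-normalized subject lines."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster_subjects(subjects: list[str], threshold: float = 0.8) -> list[list[str]]:
    """Greedy clustering: each subject joins the first cluster whose
    representative (first member) it resembles above the threshold."""
    clusters: list[list[str]] = []
    for s in subjects:
        for c in clusters:
            if similarity(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

subjects = [
    "Your invoice #4821 is overdue",   # two polymorphic variants of the
    "Your invoice #9310 is overdue",   # same lure (only the number changes)
    "Invoice overdue: action required",
    "Team lunch on Friday",
]
clusters = cluster_subjects(subjects)
print(len(clusters))  # 3 groups: the two invoice-number variants cluster together
```

Real mail-security pipelines use far richer signals (sender infrastructure, URLs, attachment hashes), but the principle of matching on similarity rather than equality is the same.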

Attackers also alter URLs and resend modified messages in real time if initial attempts fail to generate engagement, making such attacks even harder to contain. Because AI-enhanced schemes are so adaptable, traditional security filters and static defences are no longer sufficient, and organisations must evolve their countermeasures to keep pace with this rapidly changing threat landscape.

A recent global survey reveals an alarming reality: most individuals still struggle to distinguish phishing attempts generated by artificial intelligence from genuine messages.

Only 46 per cent of respondents correctly recognised a simulated phishing email crafted by artificial intelligence. The remaining 54 per cent either assumed it was real or admitted uncertainty, underscoring how effectively AI now impersonates legitimate communications.

Awareness was relatively consistent across age groups, with Gen Z (45%), millennials (47%), Generation X (46%), and baby boomers (46%) performing almost identically. In the era of AI-enhanced social engineering, no generation is meaningfully less susceptible to deception than another.

While most participants acknowledged that artificial intelligence has become a tool for deceiving users online, the study showed that awareness alone cannot prevent compromise. When the same group was shown a legitimate, human-written corporate email, only 30 per cent correctly identified it as authentic, a sign that digital trust is slipping and that people are relying on instinct rather than evidence.

The study was conducted by Talker Research on behalf of Yubico as part of its Global State of Authentication Survey. During Cybersecurity Awareness Month in October, Talker Research gathered insights from users across the U.S., the U.K., Australia, India, Japan, Singapore, France, Germany, and Sweden.

The findings make clear how vulnerable users are to increasingly AI-driven threats: more than four in ten respondents (44%) had interacted with phishing messages within the past year by clicking links or opening attachments, and 1 per cent had done so within the past week.

Younger generations appear more susceptible to phishing content, with Gen Z (62%) and millennials (51%) reporting significantly higher engagement than Generation X (33%) or baby boomers (23%). Email remains the most prevalent attack vector, accounting for 51 per cent of incidents, followed by text messages (27%) and social media messages (20%).

Asked why they fell for these messages, many respondents cited their convincing nature and their resemblance to genuine corporate correspondence, demonstrating that even technologically sophisticated individuals struggle to keep up with AI-driven deception.

Although AI-driven scams are growing more sophisticated, cybersecurity experts stress that families are not defenceless, and a few simple, proactive habits can substantially reduce risk. If an unexpected or alarming message arrives, pause before responding and verify the source by calling back on a trusted number rather than the one provided in the communication.

Family "safe words" can help confirm authenticity during emergencies and guard against emotional manipulation. Individuals can also watch for red flags such as urgent demands for action, pressure to share personal information, or inconsistencies in tone and detail.

Individuals and businesses alike must stay alert to emerging threats such as deepfakes, which are often betrayed by subtle signs like mismatched audio, unnatural facial movements, or inconsistent visual details. Technology, too, can play a crucial role in maintaining and fortifying digital security.

Bitdefender, for example, offers a multi-layered security suite that takes a comprehensive approach to family protection, detecting and blocking fraudulent content before it reaches users. Through email scam detection, malicious link filtering, and AI-driven tools such as Bitdefender Scamio and Link Checker, the platform protects users across the broad range of channels scammers exploit.

For mobile users, especially on Android, Bitdefender has integrated call-blocking features into its application. These provide an additional layer of defence against robocalls and impersonation schemes frequently used by fraudsters targeting American homes.

Bitdefender's family plans let users secure all their devices under a single umbrella, combining privacy, identity monitoring, and scam prevention into one easily managed solution. As digital deception becomes increasingly human-like, effective security is about much more than blocking malware.

It is about preserving trust across every interaction. As artificial intelligence continues to shape phishing, distinguishing deception from authenticity will only grow harder, demanding a shift from reactive defence to proactive digital resilience.

Experts stress that fighting AI-driven social engineering requires not only advanced technology but a culture of continuous awareness. Employees need regular security training that mirrors real-world situations so they can recognise potential phishing attempts before they click. Individuals, meanwhile, should use multi-factor authentication, password managers, and verified communication channels to safeguard both personal and professional information.

On a broader level, governments, cybersecurity vendors, and digital platforms must collaborate on shared frameworks for identifying and reporting AI-enhanced scams as soon as they appear, before they can spread.

While AI has undoubtedly enhanced the arsenal of cybercriminals, it can just as readily strengthen defences through adaptive threat intelligence, behavioural analytics, and automated response systems. People must remain vigilant, educated, and innovative on this new digital battleground.

The challenge, ultimately, is to harness AI's potential not to deceive people but to protect them, and to make digital trust the foundation of tomorrow's security systems.

Ransomware Payments Plummet in 2024 Despite Surge in Cyberattacks

 

The past year witnessed a series of devastating ransomware attacks that disrupted critical sectors. Cyber extortion groups targeted Change Healthcare, crippling hundreds of US pharmacies and clinics, exploited security loopholes in Snowflake's customer accounts to infiltrate high-profile targets, and secured a record-breaking $75 million from a single victim.

Despite these high-profile incidents, data reveals an unexpected trend: overall ransomware payments declined in 2024, with the second half of the year experiencing the steepest drop ever recorded. A report by cryptocurrency analytics firm Chainalysis shows that ransomware payments totaled $814 million in 2024, marking a 35% decrease from the record $1.25 billion paid in 2023. The decline became more pronounced between July and December, when hackers collected only $321 million, compared to $492 million in the first half of the year—representing the largest six-month reduction in ransomware payments observed by Chainalysis.
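The report's headline numbers are internally consistent, which a little arithmetic confirms (figures as cited above):

```python
# Chainalysis figures cited in the text, in US dollars
total_2023 = 1.25e9                 # record total for 2023
h1_2024, h2_2024 = 492e6, 321e6     # first and second half of 2024
total_2024 = h1_2024 + h2_2024      # 813e6, consistent with the ~$814M reported

decline_pct = (total_2023 - 814e6) / total_2023 * 100
print(round(decline_pct))  # 35, matching the reported 35% decrease
```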

“The drastic reversal of the trends we were seeing in the first half of the year to the second was quite surprising,” says Jackie Burns Koven, head of cyber threat intelligence at Chainalysis. She attributes this shift to law enforcement takedowns and disruptions, some of which had delayed effects as organizations grappled with major breaches.

Significant law enforcement actions in late 2023 and early 2024 targeted major ransomware groups. Just before Christmas in 2023, the FBI exploited vulnerabilities in BlackCat (AlphV)'s encryption software, distributed decryption keys to victims, and dismantled the group’s dark-web infrastructure. In February 2024, the UK's National Crime Agency (NCA) struck a major blow against Lockbit, seizing its cryptocurrency wallets and exposing its cybercriminal network.

Initially, both groups appeared to recover. AlphV orchestrated a major attack on Change Healthcare, disrupting payments at US pharmacies and extorting $22 million. Lockbit quickly reestablished its operations through a new dark-web platform. However, law enforcement actions had deeper consequences than initially apparent. AlphV executed an “exit scam,” disappearing with the ransom and leaving its hacker affiliates empty-handed. Lockbit’s operations also diminished following the NCA’s crackdown, with distrust growing in cybercriminal circles after authorities identified its alleged leader, Dmitry Khoroshev. In May 2024, the US Treasury imposed sanctions on Khoroshev, complicating ransom payments to the group.

New Ransomware Gangs Struggle to Match Predecessors

While emerging ransomware groups attempted to fill the void left by these takedowns, many lacked the sophistication to target high-value victims. “Their talent is not quite as robust as their predecessors,” notes Burns Koven. As a result, ransom demands shrank, often amounting to tens of thousands rather than millions of dollars.

Although 2024 saw an increase in ransomware attacks—4,634 incidents compared to 4,400 in 2023—lower ransom payouts suggest that newer cybercriminals prioritized volume over impact. “What we're seeing in terms of payments is a reflection of newer threat actors being attracted by the amount of money that they see you can make in ransomware, trying to get into the game and not being very good at it,” says Allan Liska, a threat intelligence analyst at Recorded Future.

Stronger Cyber Defenses and Cryptocurrency Regulations

Beyond law enforcement interventions, the decline in payments is also linked to heightened awareness and improved cybersecurity measures. Governments and institutions have implemented stronger ransomware response strategies, while increased cryptocurrency regulation and crackdowns on illicit financial channels have complicated ransomware payments. Authorities have particularly targeted crypto mixers, tools used by cybercriminals to anonymize transactions.

Despite the downward trend in payments, historical data suggests that ransomware remains cyclical. In 2022, total payments fell to $655 million, down from $1.07 billion in 2021, only to surge again in 2023 to $1.25 billion. Experts caution against interpreting short-term declines as long-term victories. “If the baddies had a couple of brilliant quarters, a dip will follow, same as if the goodies had some good quarters,” says Brett Callow, managing director at FTI Consulting. “That’s why we really need to analyze trends over a longer period.”

Additionally, the true scale of ransomware payments remains difficult to quantify, as cybercriminals often inflate their success and many victims choose not to report attacks due to stigma or regulatory concerns.

Chainalysis researchers emphasize that the decline in ransomware payments should not be mistaken for a lasting solution. “We're still standing in the rubble, right? We can't go tell everyone, everything's great, we solved ransomware—they’re continuing to go after schools, after hospitals and critical infrastructure,” says Burns Koven. However, the data does serve as an important indicator that sustained investment in ransomware defense is yielding results.

Navigating 2025: Emerging Security Trends and AI Challenges for CISOs

 

Security teams have always needed to adapt to change, but 2025 is poised to bring unique challenges, driven by advancements in artificial intelligence (AI), sophisticated cyber threats, and evolving regulatory mandates. Chief Information Security Officers (CISOs) face a rapidly shifting landscape that requires innovative strategies to mitigate risks and ensure compliance.

The integration of AI-enabled features into products is accelerating, with large language models (LLMs) introducing new vulnerabilities that attackers may exploit. As vendors increasingly rely on these foundational models, CISOs must evaluate their organization’s exposure and implement measures to counter potential threats. 

"The dynamic landscape of cybersecurity regulations, particularly in regions like the European Union and California, demands enhanced collaboration between security and legal teams to ensure compliance and mitigate risks," experts note. Balancing these regulatory requirements with emerging security challenges will be crucial for protecting enterprises.

Generative AI (GenAI), while presenting security risks, also offers opportunities to strengthen software development processes. By automating vulnerability detection and bridging the gap between developers and security teams, AI can improve efficiency and bolster security frameworks.

Trends to Watch in 2025

1. Vulnerabilities in Proprietary LLMs Could Lead to Major Security Incidents

Software vendors are rapidly adopting AI-enabled features, often leveraging proprietary LLMs. However, these models introduce a new attack vector. Proprietary models reveal little about their internal guardrails or origins, making them challenging for security professionals to manage. Vulnerabilities in these models could have cascading effects, potentially disrupting the software ecosystem at scale.

2. Cloud-Native Workloads and AI Demand Adaptive Identity Management

The rise of cloud-native applications and AI-driven systems is reshaping identity management. Traditional, static access control systems must evolve to handle the surge in service-based identities. Adaptive frameworks are essential for ensuring secure and efficient access in dynamic digital environments.

3. AI Enhances Security in DevOps

A growing number of developers—58% according to recent surveys—recognize their role in application security. However, the demand for skilled security professionals in DevOps remains unmet.

AI is bridging this gap by automating repetitive tasks, offering smart coding recommendations, and integrating security into development pipelines. Authentication processes are also being streamlined, with AI dynamically assigning roles and permissions as services deploy across cloud environments. This integration enhances collaboration between developers and security teams while reducing risks.

CISOs must acknowledge the dual-edged nature of AI: while it introduces new risks, it also offers powerful tools to counter cyber threats. By leveraging AI to automate tasks, detect vulnerabilities, and respond to threats in real-time, organizations can strengthen their defenses and adapt to an evolving threat landscape.

The convergence of technology and security in 2025 calls for strategic innovation, enabling enterprises to not only meet compliance requirements but also proactively address emerging risks.


Cyberattacks and Technology Disruptions: Leading Threats to Business Growth

 

The global average cost of a data breach soared to nearly $4.9 million in 2024, marking a 10% increase compared to the previous year, according to a report by IBM.

In late October, UnitedHealth disclosed that a significant cyberattack on its Change Healthcare subsidiary earlier in 2024 might have exposed the data of 100 million individuals. This incident is regarded as the largest healthcare data breach ever reported to federal regulators, as first reported by Healthcare Dive.

Earlier that month, the company revealed the breach had led to a financial impact of $2.5 billion over the nine months ending September 30, including $1.7 billion in direct response costs. Additionally, the business disruption caused by the attack was estimated at $705 million.

“We continue to work with customers to bring transaction volumes back to pre-event levels and to win new business with our now more modern, secure, and capable offerings,” UnitedHealth CFO John Rex stated during an earnings call. “We expect to continue to build back the business to pre-attack levels over the course of ’25 and estimate next year’s full year impact will be roughly half of the ’24 level.”

Other major companies like AT&T, Live Nation Entertainment (the owner of Ticketmaster), and Dell also reported significant data breaches in 2024.

Chubb's research highlighted that 40% of executives identified cyber breaches and data leaks as the most disruptive and financially challenging man-made threats.

The study also found that 86% of businesses either have or plan to implement business interruption coverage for risks such as cyberattacks, natural disasters, or supply chain disruptions. Of these, 53% already have coverage, while another third intend to add it within the next year.

Monitoring cyber incidents has become the most widely used tool for mitigating risks.

“Corporate leaders must take a holistic approach to simultaneously mitigate both new and old business risks effectively,” the report emphasized. “They must also develop the ability to monitor and mitigate all these risks around the clock to ensure they are effectively protected.”

The findings are based on a survey of 517 executives from various industries across the U.S. and Canada.

Cyber Attacks by North Korean Hackers on Cryptocurrency Platforms Reach $1 Billion in 2023

 

A recent study by Chainalysis, a blockchain analytics firm, has revealed a surge in cyber attacks on cryptocurrency platforms linked to North Korea. The data, covering the period from 2016 to 2023, indicates that 20 crypto platforms were targeted by North Korean hackers in 2023 alone, marking the highest level in the recorded period.

According to the report, North Korean hackers managed to steal just over $1 billion in crypto assets in the past year. While this amount is slightly less than the record $1.7 billion stolen in 2022, the increasing trend is a cause for concern among cybersecurity experts.

Chainalysis highlighted the growing threat from cyber-espionage groups such as Kimsuky and the Lazarus Group, which employ various malicious tactics to accumulate significant amounts of crypto assets. This aligns with the Federal Bureau of Investigation's (FBI) earlier attribution of a $100 million crypto heist on the Horizon Bridge in 2022 to North Korea-linked hackers.

Supporting these findings, TRM Labs, a blockchain intelligence firm, reported that North Korea-affiliated hackers stole at least $600 million in crypto assets in 2023. The frequency and success of these attacks underscore the sophistication and persistence of North Korea's cyber capabilities.

The report cited a notable incident in September, where the FBI confirmed that North Korea's Lazarus Group was responsible for stealing around $41 million in crypto assets from the online casino and betting platform Stake.com. Investigations led to the U.S. Department of the Treasury's Office of Foreign Assets Control (OFAC) sanctioning Sinbad.io, a virtual currency mixer identified as a key money-laundering tool for Lazarus Group.

Global efforts to counter the threat include sanctions, particularly as previous research indicated that North Korea-affiliated hackers used stolen crypto funds to finance nuclear weapons programs. The UN has imposed sanctions to limit the regime's access to funding sources supporting its nuclear activities.

TRM Labs emphasized the need for ongoing vigilance and innovation from businesses and governments, stating, "With nearly $1.5 billion stolen in the past two years alone, North Korea’s hacking prowess demands continuous vigilance and innovation from business and governments."

Despite advancements in cybersecurity and increased international collaboration, the report predicts that 2024 is likely to bring further disruptions from North Korea, challenging the global community to strengthen its defenses against these persistent digital attacks. The findings were first reported by CNBC.

Report: Ransomware Attacks Hit Record High in September

 

In September, a notable surge in ransomware attacks was recorded, as revealed by NCC Group's September Threat Pulse. Leak sites disclosed details of 514 victims, marking a significant 153% increase compared to the same period last year. This figure surpassed the previous high set in July 2023 at 502 attacks.

Among the fresh wave of threat actors, LostTrust emerged as the second most active group, accounting for 10% of all attacks with a total of 53. Another newcomer, RansomedVC, secured the fourth spot with 44 attacks, making up 9% of the total. LostTrust, believed to have formed in March of the same year, mirrors the double-extortion tactics of established threat actors.

Well-established threat actors also remained active in September. LockBit maintained its lead from August, while Clop's activity diminished, with the group responsible for only three ransomware attacks during the month.

In line with previous trends, North America remained the primary target for ransomware attacks, experiencing 258 incidents in September.

Europe followed as the second most targeted region with 155 attacks, trailed by Asia with 47. Month over month, attacks on North America rose 3% and attacks on Europe rose 2%, while attacks on Asia fell 6%, indicating a shifting focus among threat actors toward Western regions.

Industrials continued to bear the brunt of attacks, comprising 40% (19) of the total, followed by Consumer Cyclicals at 21% (10), and Healthcare at 15% (7). The sustained focus on Industrials is unsurprising, given the allure of Personally Identifiable Information (PII) and Intellectual Property (IP) for threat actors. 

The Healthcare sector witnessed a notable surge, experiencing 18 attacks, marking an 86% increase from August. This trend aligns with patterns observed earlier in the year, suggesting that August's dip was an anomaly. The pharmaceutical industry's susceptibility to ransomware attacks continues due to the potential financial impact.

The surge in ransomware attacks can be attributed in part to the emergence of new threat actors, notably RansomedVC. Like established groups such as 8Base, RansomedVC also presents itself as a penetration testing entity.

However, the group weaponizes Europe's General Data Protection Regulation (GDPR) in its extortion: it threatens to report the breach, along with any vulnerabilities discovered in the target's network, to regulators unless the ransom is paid. This approach intensifies pressure on victims, as GDPR allows fines of up to 4% of a victim's annual global turnover.
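To make the scale of that regulatory pressure concrete: the GDPR's upper fine tier is the greater of €20 million or 4% of annual global turnover. A minimal sketch of that calculation, using a hypothetical company's turnover figure for illustration:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR tier-2 fine:
    the greater of EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A hypothetical company with EUR 2 billion in annual global turnover
# faces a maximum fine of EUR 80 million.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0

# For smaller firms, the EUR 20 million floor dominates.
print(max_gdpr_fine(100_000_000))  # 20000000.0
```

For large enterprises the 4% term quickly dwarfs the €20 million floor, which is precisely why the threat of a regulator report gives extortionists leverage.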

RansomedVC garnered attention by claiming responsibility for the attack on Sony, the Japanese electronics giant, on September 24th. In that incident, RansomedVC compromised the company's systems and offered stolen data for sale. The successful targeting of a global brand like Sony underscores RansomedVC's significant impact and suggests the group will remain active in the months ahead.

Matt Hull, Global Head of Threat Intelligence at NCC Group, commented on the situation, noting that the surge in attacks in September was somewhat anticipated for this time of year. However, what sets this apart is the sheer volume of these attacks and the emergence of new threat actors playing a major role in this surge. Groups like LostTrust, Cactus, and RansomedVC stand out for their adaptive techniques, putting extra pressure on victims. 

The adoption of the double extortion model and the embrace of Ransomware-as-a-Service (RaaS) by these new threat actors signal an evolving landscape in global ransomware attacks. Hull predicts that other groups may explore similar methods in the coming months to increase pressure on victims.