Aussie Telecom Breach Raises Alarm Over Customer Data Safety

 




A recent cyberattack on TPG Telecom has reignited concerns about how safe personal information really is in the hands of major companies. What the provider initially downplayed as a “limited” incident has in fact left hundreds of thousands of customers vulnerable to online scams.

The intrusion was uncovered on August 16, when unusual activity was detected in the systems of iiNet, one of TPG’s subsidiary brands. Hackers were able to get inside by misusing stolen employee logins, which granted access to iiNet’s order management platform. This internal tool is mainly used to handle service requests, but it contained far more sensitive data than many would expect.


Investigators now estimate that the attackers walked away with:

• Roughly 280,000 email addresses linked to iiNet accounts

• Close to 20,000 landline phone numbers

• Around 10,000 customer names, addresses, and contact details

• About 1,700 modem setup credentials


Although no banking details or government ID documents were exposed, cybersecurity experts caution that this type of information is highly valuable for criminals. Email addresses and phone numbers can be exploited to craft convincing phishing campaigns, while stolen modem passwords could give attackers the chance to install malware or hijack internet connections.

TPG has apologised for the breach and is reaching out directly to customers whose details were involved. Those not affected are also being notified for reassurance. So far, there have been no confirmed reports of the stolen records being used maliciously.

Even so, the risks are far from minor. Phishing messages that appear to come from trusted sources can lead victims to unknowingly share bank credentials, install harmful software, or hand over personal details that enable identity theft. As a result, affected customers are being urged to remain alert, treat incoming emails with suspicion, and update passwords wherever possible, especially on home modems.

The company has said it is cooperating with regulators and tightening its security protocols. But the case underlines a growing reality: personal data does not need to include credit card numbers to become a target. Seemingly routine details, when collected in bulk, can still provide criminals with the tools they need to run scams.

As cyberattacks grow more frequent, customers are left with the burden of vigilance, while companies face rising pressure to prove that “limited” breaches do not translate into large-scale risks.



Fake Netflix Job Offers Target Facebook Credentials in Real-Time Scam

 

A sophisticated phishing campaign is targeting job seekers with fake Netflix job offers designed to steal Facebook login credentials. The scam specifically focuses on marketing and social media professionals who may have access to corporate Facebook business accounts. 

Modus operandi 

The attack begins with highly convincing, AI-generated emails that appear to come from Netflix's HR team, personally tailored to recipients' professional backgrounds. When job seekers click the "Schedule Interview" link, they're directed to a fraudulent career site that closely mimics Netflix's official page. 

The fake site prompts users to create a "Career Profile" and offers options to log in with Facebook or email. However, regardless of the initial choice, victims are eventually directed to enter their Facebook credentials. This is where the scam becomes particularly dangerous. 

Real-time credential theft 

What makes this attack especially sophisticated is the use of websocket technology that allows scammers to intercept login details as they're being typed. As Malwarebytes researcher Pieter Arntz explains, "The phishers use a websocket method that allows them to intercept submissions live as they are entered. This allows them to try the credentials and if your password works, they can log into your real Facebook account within seconds". 

The attackers can immediately test stolen credentials on Facebook's actual platform and may even request multi-factor authentication codes if needed. If passwords don't work, they simply display a "wrong password" message to maintain the illusion. 

While personal Facebook accounts have value, the primary goal is accessing corporate social media accounts. Cybercriminals seek marketing managers and social media staff who control company Facebook Pages or business accounts. Once compromised, these accounts can be used to run malicious advertising campaigns at the company's expense, demand ransom payments, or leverage the organization's reputation for further scams.

Warning signs and protection

Security researchers have identified several suspicious email domains associated with this campaign, including addresses ending with @netflixworkplaceefficiencyhub.com, @netflixworkmotivation, and @netflixtalentnurture.com. The fake hiring site was identified as hiring.growwithusnetflix[.]com, though indicators suggest the operators cleared their tracks after the scam was exposed. 

Job seekers should be cautious of unsolicited job offers, verify website addresses carefully, and remember that legitimate Netflix recruitment doesn't require Facebook login credentials. The campaign demonstrates how scammers exploit both job market anxiety and the appeal of working for prestigious companies to execute sophisticated credential theft operations.
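Verifying a website address programmatically is simple in principle. The sketch below (Python, with a hypothetical allow-list of official domains) shows why a lookalike such as the campaign's hiring.growwithusnetflix[.]com fails a strict suffix check even though it contains the brand name:

```python
from urllib.parse import urlparse

# Hypothetical allow-list for illustration; a real check would use the
# domains a company actually publishes for its careers site.
OFFICIAL_DOMAINS = {"netflix.com"}

def is_official(url: str) -> bool:
    """True only if the hostname is an official domain or a true subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://jobs.netflix.com/search"))        # True: real subdomain
print(is_official("https://hiring.growwithusnetflix.com/"))  # False: different registered domain
```

A naive substring test such as `"netflix" in host` would accept the fake site; only a suffix check anchored at a dot distinguishes a genuine subdomain from a lookalike registered domain.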

A Comprehensive Look at Twenty AI-Assisted Coding Risks and Remedies


 

In recent years, artificial intelligence has radically changed the way software is created, tested, and deployed, marking a significant shift in software development history. What began as a simple autocomplete function has evolved into sophisticated AI systems capable of producing entire modules of code from natural language inputs. 

The development industry has become more automated: backend services, APIs, machine learning pipelines, and even complete user interfaces can now be designed in a fraction of the time they once took. Across a range of industries, this acceleration is transforming the culture of development. 

Teams at startups and enterprises alike are integrating artificial intelligence into their workflows to automate tasks once exclusively the domain of experienced engineers, introducing a new way of delivering software. This rapid adoption has given rise to a culture known as "vibe coding," in which developers rely on AI tools to handle a large portion of the development process instead of using them merely to assist with a few small tasks.

Rather than manually debugging or rethinking system design, they ask the AI to come up with corrections, enhancements, or entirely new features. The trend is attractive, particularly for solo developers and non-technical founders eager to turn their ideas into products at unprecedented speed.

There is a great deal of enthusiasm in communities such as Hacker News and Indie Hackers, with many claiming that artificial intelligence is the key to levelling the playing field in technology. With limited resources and technical knowledge, prototyping, minimum viable products, and lightweight applications have become possible in record time. 

While that enthusiasm fuels innovation at the grassroots, the picture is quite different at large companies and in critical sectors. Finance, healthcare, and government services are subject to strict compliance and regulatory frameworks in which stability, security, and long-term maintainability are non-negotiable. 

For these organisations, AI code generation presents complex risks that go far beyond questions of productivity. Using third-party AI services raises concerns about intellectual property, data privacy, and software provenance. In sectors where a single coding error could mean the loss of millions of dollars, regulatory penalties, or even threats to public safety, adopting AI-driven development has to be handled with extreme caution. This tension between speed and security is what makes AI-assisted coding so challenging. 

On the one hand, the benefits are undeniable: faster iteration, reduced workloads, quicker launches, and potential cost reductions. On the other, the hidden dangers of overreliance are becoming more apparent over time. Developers risk losing touch with the fundamentals of software engineering and accepting AI-produced solutions they do not fully understand. The result can be code that appears to work on the surface but harbours subtle flaws, inefficiencies, or vulnerabilities that only become apparent under pressure. 

As systems scale, these small flaws can ripple outward into systemic fragility. In mission-critical environments, such oversights are often catastrophic. The risks associated with AI-assisted coding vary widely and are highly unpredictable. 

Some of the most pressing issues are hidden logic flaws that may go undetected until unusual inputs stress a system; excessive permissions embedded in generated code that inadvertently widen attack surfaces; and opaque provenance, since AI systems are trained on vast, unverified repositories of public code. 

Security vulnerabilities are another source of concern: AI often generates weak cryptography practices, improper input validation, and even hardcoded credentials. If such flaws reach production, cybercriminals can exploit them. 

Furthermore, compliance violations may also occur as a result of these flaws. In many organisations, licensing and regulatory obligations must be adhered to; however, AI-generated output may contain restricted or unlicensed code without the companies' knowledge. In the process, companies can face legal disputes as well as penalties for inappropriately utilising AI. 

On the other hand, overreliance on AI risks diminishing human expertise. Junior developers may grow accustomed to outsourcing their thinking to AI tools rather than learning foundational problem-solving skills. If teams cannot maintain critical competencies over time, their long-term resilience suffers. 

Accountability is a further open question: when AI-generated code causes a breach or failure, it is unclear whether the organisation, the developer, or the AI vendor is responsible. Industry reports suggest these concerns need to be addressed urgently, and a growing body of research indicates that more than half of organisations experimenting with AI-assisted coding have encountered security issues as a result. 

These risks are not merely theoretical; they are already materialising in real-life situations. As adoption continues to ramp up, the industry must move quickly to develop safeguards, standards, and governance frameworks against these emerging threats. Comprehensive mitigation strategies are taking shape, but their success depends on a disciplined and holistic approach. 

AI-generated code should be subjected to the same rigorous review processes as contributions from junior developers, including peer review, testing, and detailed documentation. Security tooling should be integrated into the development pipeline so that vulnerabilities are scanned for and compliance policies enforced. 
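As a rough illustration of the kind of check such a pipeline might run, the sketch below flags two of the flaw classes named above, hardcoded credentials and weak hashing. The regex patterns are illustrative stand-ins for a dedicated secret scanner or SAST tool, not a substitute for one:

```python
import re

# Illustrative patterns only; real pipelines rely on purpose-built
# secret scanners and static analysis, not a handful of regexes.
SUSPICIOUS = [
    (re.compile(r"(password|passwd|secret|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]", re.I),
     "possible hardcoded credential"),
    (re.compile(r"\bmd5\b|\bsha1\b", re.I),
     "weak hash algorithm"),
]

def review_snippet(code: str):
    """Return (line number, message) pairs for each suspicious line."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), 1):
        for pattern, message in SUSPICIOUS:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'api_key = "sk-live-1234"\ndigest = hashlib.md5(data)'
print(review_snippet(sample))
# [(1, 'possible hardcoded credential'), (2, 'weak hash algorithm')]
```

Gating merges on an empty findings list is one simple way to force AI-generated contributions through the same checks as human ones.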

Beyond technical safeguards, cultural and educational initiatives are crucial. Organisations are also adopting provenance-tracking systems that log AI contributions, ensuring traceability and accountability for every line of code. Developers must treat AI not as an infallible authority but as an assistant whose output is scrutinised regularly. 

Rather than replacing one with the other, the goal should be to combine the efficiency of artificial intelligence with the judgment and creativity of human engineers. Governance frameworks will play a similarly important role: through policies-as-code approaches, organisational rules for compliance and security are increasingly integrated directly into automated workflows. 

This lets enterprises maintain consistency even when AI is deployed across many teams and environments. As a secondary layer of defence, red-teaming exercises, in which security professionals deliberately stress-test AI-generated systems, surface the weaknesses malicious actors are most likely to exploit. 

Regulators and vendors are also working to clarify liability in cases where AI-generated code causes real-world harm, and a broader discussion of legal responsibility must continue in the meantime. As AI's role in software development grows, the question is no longer whether organisations will use it, but how they will integrate it effectively. 

A startup can embrace it to move quickly, whereas an enterprise must balance innovation with compliance and risk management. Those who succeed in this new world will be those who build guardrails in advance and invest in both technology and culture, ensuring that efficiency does not come at the expense of trust or resilience. The future of software development will not focus solely on machines. 

The coding process will be shaped by the combination of human expertise and artificial intelligence. AI may speed up the mechanics of coding, but accountability and craftsmanship will remain human responsibilities. The most forward-looking organizations will recognize this balance, using AI to drive innovation while maintaining the discipline needed to protect their systems, customers, and reputations. 

A true test of trust for the next generation of technology will not come from a battle between man and machine, but from the ability of both to work together to build secure, sustainable, and trustworthy technologies for a better, safer world.

Cybercriminals Harness AI and Automation, Leaving Southeast Asia Exposed

 

A new study warns that cybercriminals are leveraging artificial intelligence (AI) and automation to strike faster and with greater precision, exposing critical weaknesses in Southeast Asia—a region marked by rapid digital growth and interconnected supply chains. The findings urge businesses to treat cybersecurity as the foundation of digital trust and organizational resilience.

The report highlights a significant surge in sophisticated, multi-layered attacks targeting global enterprises, with Southeast Asia among the most vulnerable. Nearly 70% of breaches involved attackers using at least three entry points simultaneously—ranging from web browsers and cloud applications to networks and human behavior. Alarmingly, 44% of these incidents began with browser-based exploits, taking advantage of everyday workplace tools like file-sharing services and collaboration platforms. Researchers caution that disconnected and siloed security solutions cannot keep pace with attackers who seamlessly move across fragmented IT environments. To counter this, organizations must implement integrated, real-time protection across cloud, endpoint, identity, and network layers.

Phishing has returned as the top method of unauthorized access, responsible for 23% of incidents in 2024. What sets this new wave apart is the use of generative AI, allowing cybercriminals to create convincing phishing campaigns that mimic professional communication styles, workflows, and even individual employee voices. Experts emphasize that traditional once-a-year security training is no longer sufficient. Instead, businesses must adopt continuous, behavior-based awareness programs alongside AI-driven detection tools that monitor anomalies across emails, messaging platforms, and user activities. The goal is to create a dynamic “human firewall” where people and machines work in tandem against evolving threats.
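The baseline-and-deviation logic behind such anomaly monitoring can be sketched very simply. The example below uses a plain z-score rather than AI, and the three-sigma threshold and sample data are arbitrary choices for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=3.0):
    """Flag indices whose activity deviates more than `threshold`
    standard deviations from the historical mean."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if sigma and abs(c - mu) / sigma > threshold]

# 30 quiet days of message volume, then a sudden burst on day 30
history = [12, 11, 13, 12, 10, 12, 11, 13, 12, 11] * 3 + [90]
print(flag_anomalies(history))  # [30]
```

Production tools layer far richer signals (sender reputation, writing style, time-of-day patterns) on top of this idea, but the principle is the same: learn what normal looks like per user, then alert on deviation.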

The study also reveals a troubling rise in insider-driven breaches, which tripled in 2024. Nation-state groups—most notably from North Korea—successfully infiltrated companies by posing as job applicants, even using deepfake video interviews convincing enough to secure technical roles and gain insider access. Traditional security measures often fail against attackers disguised as legitimate users. To address this, experts recommend adopting zero-trust frameworks that enforce least-privilege access, continuous verification, and ongoing behavioral monitoring. The report stresses that “trust cannot be assumed; it must be continuously validated.”

Perhaps the most alarming discovery is the accelerated pace of cyber incidents. Data theft, which once took days, now unfolds within hours—sometimes less than one. In 2024, one in four breaches involved data exfiltration within five hours of initial compromise, with some completed in under an hour. Automation and AI have drastically shortened the attacker’s kill chain. The only effective defense, the report notes, is speed: leveraging automated triage, unified threat intelligence, and AI-powered response mechanisms to prevent security teams from lagging behind.

For ASEAN economies—where cloud adoption, cross-border data sharing, and sprawling supply chains intersect—the risks are especially high. The report urges regional leaders to view cybersecurity as a strategic priority, directly linked to resilience and long-term trust. “The most damaging breaches stem from too much complexity, too little visibility, and too much trust,” the report concludes. By embedding security from code to cloud, simplifying operations through automation, and embracing threat-informed strategies, Southeast Asian businesses can turn vulnerabilities into resilience.

Data Portability and Sovereign Clouds: Building Resilience in a Globalized Landscape

 

The emergence of sovereign clouds has become increasingly inevitable as organizations face mounting regulatory demands and geopolitical pressures that influence where their data must be stored. Localized cloud environments are gaining importance, ensuring that enterprises keep sensitive information within specific jurisdictions to comply with legal frameworks and reduce risks. However, the success of sovereign clouds hinges on data portability, the ability to transfer information smoothly across systems and locations, which is essential for compliance and long-term resilience.  

Many businesses cannot afford to wait for regulators to impose requirements; they need to proactively adapt. Yet, the reality is that migrating data across hybrid environments remains complex. Beyond shifting primary data, organizations must also secure related datasets such as backups and information used in AI-driven applications. While some companies focus on safeguarding large language model training datasets, others are turning to methods like retrieval-augmented generation (RAG) or AI agents, which allow them to leverage proprietary data intelligence without creating models from scratch. 

Regardless of the approach, data sovereignty is crucial, but the foundation must always be strong data resilience. Global regulators are shaping the way enterprises view data. The European Union, for example, has taken a strict stance through the General Data Protection Regulation (GDPR), which enforces data sovereignty by applying the laws of the country where data is stored or processed. Additional frameworks such as NIS2 and DORA further emphasize the importance of risk management and oversight, particularly when third-party providers handle sensitive information.

Governments and enterprises alike are concerned about data moving across borders, which has made sovereign cloud adoption a priority for safeguarding critical assets. Some governments are going a step further by reducing reliance on foreign-owned data center infrastructure and reinvesting in domestic cloud capabilities. This shift ensures that highly sensitive data remains protected under national laws. Still, sovereignty alone is not a complete solution. 

Even if organizations can specify where their data is stored, there is no absolute guarantee of permanence, and related datasets like backups or AI training files must be carefully considered. Data portability becomes essential to maintaining sovereignty while avoiding operational bottlenecks. Hybrid cloud adoption offers flexibility, but it also introduces complexity. Larger enterprises may need multiple sovereign clouds across regions, each governed by unique data protection regulations. 

While this improves resilience, it also raises the risk of data fragmentation. To succeed, organizations must embed data portability within their strategies, ensuring seamless transfer across platforms and providers. Without this, the move toward sovereign or hybrid clouds could stall. SaaS and DRaaS providers can support the process, but businesses cannot entirely outsource responsibility. Active planning, oversight, and resilience-building measures such as compliance audits and multi-supplier strategies are essential. 

By clearly mapping where data resides and how it flows, organizations can strengthen sovereignty while enabling agility. As data globalization accelerates, sovereignty and portability are becoming inseparable priorities. Enterprises that proactively address these challenges will be better positioned to adapt to future regulations while maintaining flexibility, security, and long-term operational strength in an increasingly uncertain global landscape.

Cyberattack on New York Business Council Exposes Thousands to Risk



The Business Council of New York State (BCNYS), an influential body representing businesses and professional groups, has confirmed that a recent cyberattack compromised the personal information of more than 47,000 people.

In a report submitted to the Office of the Maine Attorney General, the Council disclosed that attackers accessed a wide range of sensitive data. The files included basic identifiers such as names and dates of birth, along with highly confidential records like Social Security numbers, state-issued IDs, and taxpayer identification numbers. Financial data was also exposed, including bank account details, payment card numbers, PINs, expiration dates, and even electronic signatures.

What makes this breach particularly concerning is the theft of medical records. The stolen information included healthcare providers’ names, diagnostic details, treatment histories, prescription data, and insurance documents, material that is often harder to replace or protect than financial information.

Investigators believe the attack took place in late February 2025, but the Council only uncovered it months later in August. The delay meant that for several months, criminals could have had access to the stolen records without detection. So far, officials have not confirmed any cases of identity theft linked to this incident. However, security experts note that breaches of this scale often have long-term consequences, as stolen data may circulate for years before being used.


Why it matters

The mix of financial, medical, and personal details gives criminals a powerful toolkit. With such data, they can open fraudulent credit lines, make unauthorized purchases, or submit false tax returns. Medical information raises another layer of danger — allowing fraudsters to access health services or prescriptions under someone else’s identity, potentially leaving victims to untangle costly disputes with insurers and providers.


Protective steps for those affected

1. Secure credit and banking accounts: Victims are advised to place fraud alerts or credit freezes with major credit bureaus, closely watch account activity, and notify banks of potential exposure.

2. Strengthen account security: Change passwords, use multifactor authentication wherever possible, and avoid reusing old login details.

3. Guard against tax fraud: Apply for an IRS Identity Protection PIN, which blocks others from filing tax returns in your name.

4. Monitor medical use: Review insurance and healthcare statements for unfamiliar claims or treatments, and flag suspicious activity immediately.


While BCNYS has offered free credit monitoring to those affected, the larger lesson extends far beyond this single breach. For organizations, it is a reminder that delayed detection amplifies the damage of any cyberattack. For individuals, it shows how deeply personal, financial, and medical data can be intertwined in ways that make recovery especially difficult.

Cybersecurity experts warn that these breaches are no longer isolated events but part of a larger pattern where institutions become targets precisely because they store such valuable data. The question is no longer if data will be stolen, but how quickly victims can respond and how effectively organizations can limit the fallout.



Hackers Disclose Why They Targeted North Korean Government Hackers


 

In a stunning development in the history of cybersecurity, independent hackers successfully broke into the system of a North Korean government hacker, exposing the inner workings of one of the country's most secretive cyber units. 

Disclosed on August 12, 2025, the breach sent shockwaves through the cybersecurity community and sparked an ongoing debate about how independent actors can counter state-sponsored espionage. The two hackers, who use the pseudonyms Sabre and Cyb0rg, claimed responsibility for the compromise and made the stolen data available online. 

Their disclosure has given researchers and investigators a rare glimpse into the structure, tools, and strategies of the notorious North Korean cyber group known as Kimsuky. The hackers didn't just leak the data; they also published a detailed account of their actions in Phrack, one of the hacking community's leading publications. 

Together, the data dump and their narrative present a rare, almost forensic portrait of Pyongyang's cyber-espionage apparatus. The breach has been described as one of the most significant exposures of a nation-state hacking unit in recent history, both for its scale and for the sensitivity of what it revealed. According to Sabre and Cyb0rg's accounts, the intrusion began earlier in 2025. 

At first glance, the compromised computer appeared to be a typical target, but a closer look revealed otherwise: it was later identified as belonging to a hacker allegedly working on behalf of the North Korean government.

Recognising the significance of their discovery, the duo carefully observed the system's contents and behaviour before deciding to make the information public, maintaining undetected access for almost four months. During that surveillance they came across a wide range of sensitive material, from hacking tools and exploits to detailed infrastructure data tied to ongoing operations. 

Rather than selling or concealing the information, they framed the disclosure as an act of responsibility. In an interview published by Phrack, Sabre asserted that state-sponsored hackers “deserve to be exposed” because they engage in illegal activity for all the wrong reasons. By this account, the pair were not criminals but actors trying to rebalance the cybersecurity landscape by shining a spotlight on its most dangerous and secretive members. 

The two hackers publicly disclosed the breach at DEF CON 33, the prestigious hacking conference held in Las Vegas in early August 2025, openly discussing their findings with an audience of fellow hackers, researchers, and security professionals. Their report linked the target to Kimsuky, a group widely associated with North Korean espionage and financial theft. 

The report describes several compromised devices, including a Linux laptop running Deepin 20.9 and a virtual private server that appears to have been used for phishing attempts. Alongside the presentation, an 8.9-gigabyte archive of data was released and is now hosted by the transparency collective Distributed Denial of Secrets (DDoSecrets). Researchers have since found the dataset to be a goldmine, offering an unprecedentedly detailed picture of Kimsuky's operations and technical capabilities. 

A closer look at the leaked archive shows Kimsuky to be an ambitious, technologically sophisticated group running wide-ranging campaigns against South Korean government and military organisations. The evidence analysts have uncovered is unequivocally alarming, most notably the complete source code of the Ministry of Foreign Affairs' "Kebi" e-mail service. 

Its modules covered webmail access, administrative controls, and archival functionality. With the source code in hand, attackers could probe it for exploitable vulnerabilities, raising serious concerns for South Korea's security. Phishing logs within the archive also revealed targeted attempts to compromise sensitive South Korean domains. 

Among the most prominent targets were the Defence Counterintelligence Command (dcc.mil.kr), the Ministry of Justice (spo.go.kr), and the central government portal, Korea.kr. The campaign also covered a wide variety of South Korea's most widely used email providers, including Daum, Kakao, and Naver, showing the breadth and depth of the group's targeting. The leak likewise revealed Kimsuky's full arsenal of tools. 

Researchers discovered live phishing kits, PHP scripts that generate convincing fake websites, Cobalt Strike loaders, and proxy modules that disguise malicious traffic, among other things. The cache also contains several binary files not yet identified in existing malware databases, suggesting custom-built or previously unknown strains of malware. 

Particularly notable were a backdoor built on the Tomcat kernel, a private Cobalt Strike beacon, and an Android version of ToyBox tailored for mobile attacks. The trove also revealed Kimsuky's internal phishing generator interface, known as generator.php, which was designed to disguise credential theft by serving seemingly authentic error pages once credentials had been harvested. 

The archive further included stolen certificates generated by South Korea's Government Public Key Infrastructure (GPKI), as well as a Java program designed for brute-forcing key passwords. Beyond demonstrating the technical depth and persistence of the group's operations, the leak exposed the digital traces of the operators themselves, not just the tools buried in the data. 

Records of their browsing activity linked them to suspicious GitHub accounts, showed a VPN service purchased through Google Pay, and logged frequent visits to underground hacking forums as well as Taiwanese government websites. Command-line session logs revealed direct connections between internal systems, and the use of translation tools suggested that operators relied on automated software to interpret error messages in Chinese. 

The logs also showed that activity clustered between 9 a.m. and 5 p.m. Pyongyang time, a structured, office-like rhythm that reinforces the view that these hackers are not freelancers but salaried members of a disciplined state-backed unit. 
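This kind of working-hours analysis can be sketched in a few lines of Python. The timestamps below are invented for illustration, and Pyongyang time is treated as UTC+9; this is a conceptual sketch of the technique, not the researchers' actual tooling.

```python
from datetime import datetime, timezone, timedelta

PYONGYANG = timezone(timedelta(hours=9))  # Pyongyang local time, UTC+9

def within_office_hours(ts_utc, start=9, end=17):
    """Return True if a UTC timestamp falls in 09:00-17:00 local time."""
    local = ts_utc.astimezone(PYONGYANG)
    return start <= local.hour < end

# Hypothetical log timestamps (UTC) standing in for real session logs
events = [
    datetime(2025, 8, 1, 1, 30, tzinfo=timezone.utc),   # 10:30 local
    datetime(2025, 8, 1, 5, 0, tzinfo=timezone.utc),    # 14:00 local
    datetime(2025, 8, 1, 15, 0, tzinfo=timezone.utc),   # 00:00 local, next day
]

office = sum(within_office_hours(e) for e in events)
print(f"{office}/{len(events)} events inside office hours")  # prints "2/3 ..."
```

Run over thousands of real log entries, a strong skew toward one contiguous eight-hour window is what supports the "salaried, office-bound unit" inference.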

Cybersecurity experts have stressed that the significance of this disclosure lies in its scope and depth rather than in isolated details. The revelations also confirm findings by researchers at ESET that Kimsuky has shifted in recent years from Western targets to South Korean government and business sectors. 

Using the exposure, investigators have been able to establish links between previously separate incidents and to uncover infrastructure elements that had remained hidden until now. While experts agree the breach has undoubtedly disrupted Kimsuky's operations, they caution that such disruptions are often temporary.

Nation-state groups have the resources to rebuild infrastructure, replace compromised tools, and continue their campaigns, but the transparency generated by this incident offers the international cybersecurity community a rare opportunity to strengthen defences. Using the leaked materials for attribution, researchers can better pinpoint future activity, while organisations can take preemptive measures against similar attacks. 

For South Korea in particular, the revelations underline an urgent need to modernise its cyber defence strategy, foster greater coordination between government and private networks, and invest in homegrown security technologies that reduce reliance on potentially vulnerable platforms. The implications, however, extend well beyond South Korea itself. 

For the international community, the breach highlights the power of information sharing, transparency, and persistence against even the most secretive state-sponsored adversaries. It demonstrates that there is no impenetrable shadow in which these groups can operate. 

A Rare Turning Point In Cybersecurity 

Thanks to the actions of Saber and Cyb0rg, we have caught a rare glimpse into the inner workings of a cyber apparatus that thrives on secrecy and intimidation. By exposing the data rather than exploiting it, they have opened the door for independent hackers to play a larger role in global security. The incident shows that even nation-state hackers are not beyond accountability when skill, determination, and a sense of responsibility intersect. 

Even a breach of this magnitude, however, will not permanently dismantle a group like Kimsuky. For some, the incident serves as a cautionary tale about the dangers of unchecked digital espionage. For others, the 8.9-gigabyte trove is a call to action, a reminder that even the most entrenched adversaries can be confronted with transparency. 

The lessons drawn from the trove will reverberate not only through South Korea but across the global cybersecurity community. The disclosure stands as a turning point in an industry often defined by secrecy and silence, and a reminder to governments, businesses, and individuals alike that remaining resilient in cyberspace means exposing what has been hidden, challenging what is threatening, and reinforcing what is weak.

Orange Belgium Hit by Cyberattack Affecting 850,000 Customers

 

Orange Belgium, a major telecommunications provider and subsidiary of French telecom giant Orange Group, confirmed in August 2025 a significant cyberattack on its IT systems that resulted in unauthorized access to the personal data of approximately 850,000 customers.

The attack was detected at the end of July, after which the company swiftly activated its incident response procedures, including blocking access to the affected system, strengthening its security measures, and notifying both the relevant authorities and impacted customers. An official complaint was filed with judicial authorities, and the investigation remains ongoing.

The data accessed by the attackers included surname, first name, telephone number, SIM card number, PUK (Personal Unlocking Key) code, and tariff plan. Importantly, Orange Belgium reassured customers that no critical data—such as passwords, email addresses, or bank and financial details—were compromised in this incident. This distinction is significant, as the absence of authentication and financial data reduces, but does not eliminate, risks for affected individuals. 

Affected customers are being notified by email or text message, with advice to remain vigilant for suspicious communications, particularly phishing or impersonation attempts. The company recommends that customers exercise caution with any unexpected requests for sensitive information, as criminals may use the stolen data for social engineering attacks.

Some security experts have specifically warned about the risk of SIM swapping—whereby attackers hijack a phone number by convincing a mobile operator to transfer service to a new SIM card under their control—and advise customers to request new SIM cards and PUK codes as a precaution. 

The incident is one of several cyberattacks targeting Orange and its subsidiaries in 2025, although there is no evidence to suggest that this breach is linked to previous attacks affecting Orange’s operations in other countries. Orange Belgium operates a network serving over three million customers in Belgium and Luxembourg, making this breach one of the most significant data security incidents in the region this year. 

Criticism has emerged regarding Orange Belgium’s communication strategy, with some cybersecurity experts arguing that the company underplayed the potential risks—such as SIM swapping—and placed too much responsibility on customers to protect themselves after the breach. Despite these concerns, Orange Belgium’s response included immediate technical containment, regulatory notification, and customer outreach, aligning with standard incident response protocols for major telecom providers.

The breach highlights the persistent threat of cyberattacks against telecommunications companies, which remain attractive targets due to the vast amounts of customer data they manage. While the immediate risk of financial loss or account takeover is lower in this case due to the nature of the exposed data, the incident underscores the importance of robust cybersecurity measures and clear, transparent communication with affected users. Customers are encouraged to monitor their accounts, change passwords as a precaution, and report any suspicious activity to Orange Belgium and the authorities.

Chrome VPN Extension Exposed as Spyware Capturing User Data and Screenshots

 

A popular Chrome VPN extension with more than 100,000 installs and a verified badge has been revealed as a highly sophisticated spyware tool. Security researchers discovered that the extension secretly recorded screenshots and exfiltrated sensitive user data without consent.

The extension, identified as FreeVPN.One, posed as a legitimate privacy tool while embedding hidden surveillance functions that contradicted its stated privacy protections. Despite being promoted on the Google Chrome Web Store with featured placement and a verified status, it was designed with backdoor mechanisms that logged every webpage a user visited.

Operating under the guise of online security, the extension used a deceptive two-stage framework to monitor browsing sessions, stealing sensitive data such as banking credentials, personal chats, corporate documents, and private communications. According to Koi Security analysts, the extension's transformation into spyware began in April 2025, when its developers pushed updates that expanded its permissions to enable large-scale data collection.

Researchers highlighted that the verified badge made the threat more dangerous, as users trusted the extension for online privacy, unaware it was functioning as spyware. The campaign is said to affect users worldwide, with stolen screenshots containing financial details, business data, and personal information being funneled to servers controlled by threat actors.

Technical Breakdown and Stealth Mechanisms

The extension’s malicious activity relies on a content script injection system deployed across all HTTP and HTTPS websites. Once a page loads, a delay of 1.1 seconds ensures full rendering before screenshots are taken. The background service worker then executes the chrome.tabs.captureVisibleTab() API to capture screenshots and transmit them to a remote server (aitd[.]one/brange.php) along with page URLs and unique identifiers.

To avoid detection, the spyware employs AES-256-GCM encryption with RSA key wrapping, making it extremely difficult for traditional network defenses to spot the malicious activity. Its permission requirements, including <all_urls>, tabs, and scripting, grant it extensive monitoring capabilities far beyond what is necessary for a normal VPN extension.
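One practical defence against this pattern is auditing an extension's manifest for permissions that are out of proportion to its stated purpose. The sketch below is a minimal, hypothetical audit in Python; the sample manifest is invented to mimic the over-broad grants described above and is not FreeVPN.One's actual manifest.

```python
import json

# Permissions that are suspicious for a simple VPN extension to request
RISKY = {"<all_urls>", "tabs", "scripting"}

def audit_manifest(manifest_json):
    """Return the set of risky permissions a Chrome extension manifest requests."""
    m = json.loads(manifest_json)
    requested = set(m.get("permissions", [])) | set(m.get("host_permissions", []))
    return requested & RISKY

# Hypothetical manifest mimicking the over-broad grants described in the report
sample = json.dumps({
    "manifest_version": 3,
    "name": "ExampleVPN",
    "permissions": ["tabs", "scripting", "proxy"],
    "host_permissions": ["<all_urls>"],
})

flagged = audit_manifest(sample)
print(sorted(flagged))  # prints ['<all_urls>', 'scripting', 'tabs']
```

A legitimate VPN extension typically needs `proxy` and little else; any hit in `RISKY` is a signal to investigate before installing.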

Security experts warn that the extension effectively transforms a user’s browser into an intelligence-gathering hub, with full access to personal, financial, and professional data—all without user knowledge or consent.

The Rise of the “Shadow AI Economy”: Employees Outpace Companies in AI Adoption

 




Artificial intelligence has become one of the most talked-about technologies in recent years, with billions of dollars poured into projects aimed at transforming workplaces. Yet, a new study by MIT suggests that while official AI programs inside companies are struggling, employees are quietly driving a separate wave of adoption on their own. Researchers are calling this the rise of the “shadow AI economy.”

The report, titled State of AI in Business 2025 and conducted by MIT’s Project NANDA, examined more than 300 public AI initiatives, interviewed leaders from 52 organizations, and surveyed 153 senior executives. Its findings reveal a clear divide. Only 40% of companies have official subscriptions to large language model (LLM) tools such as ChatGPT or Copilot, but employees in more than 90% of companies are using personal accounts to complete their daily work.

This hidden usage is not minor. Many workers reported turning to AI multiple times a day for tasks like drafting emails, summarizing information, or basic data analysis. These personal tools are often faster, easier to use, and more adaptable than the expensive systems companies are trying to build in-house.

MIT researchers describe this contrast as the “GenAI divide.” Despite $30–40 billion in global investments, only 5% of businesses have seen real financial impact from their official AI projects. In most cases, these tools remain stuck in test phases, weighed down by technical issues, integration challenges, or limited flexibility. Employees, however, are already benefiting from consumer AI products that require no approvals or training to start using.


The study highlights several reasons behind this divide:

1. Accessibility: Consumer tools are easy to set up, requiring little technical knowledge.

2. Flexibility: Workers can adapt them to their own workflows without waiting for management decisions.

3. Immediate value: Users see results instantly, unlike with many corporate systems that fail to show clear benefits.


Because of this, employees are increasingly choosing AI for routine tasks. The survey found that around 70% prefer AI for simple work like drafting emails, while 65% use it for basic analysis. At the same time, most still believe humans should handle sensitive or mission-critical responsibilities.

The findings also challenge some popular myths about AI. According to MIT, widespread fears of job losses have not materialized, and generative AI has yet to revolutionize business operations in the way many predicted. Instead, the problem lies in rigid tools that fail to learn, adapt, or integrate smoothly into existing systems. Internal projects built by companies themselves also tend to fail at twice the rate of externally sourced solutions.

For now, the “shadow AI economy” shows that the real adoption of AI is happening at the individual level, not through large-scale corporate programs. The report concludes that companies that recognize and build on this grassroots use of AI may be better placed to succeed in the future.



VP.NET Launches SGX-Based VPN to Transform Online Privacy

 

The virtual private network market is filled with countless providers, each promising secure browsing and anonymity. In such a crowded space, VP.NET has emerged with the bold claim of changing how VPNs function altogether. The company says it is “the only VPN that can’t spy on you,” insisting that its system is built in a way that prevents monitoring, logging, or exposing any user data. 

To support its claims, VP.NET has gone a step further by releasing its source code to the public, allowing independent verification. VP.NET was co-founded by Andrew Lee, the entrepreneur behind Private Internet Access (PIA). According to the company, its mission is to treat digital privacy as a fundamental right and to secure it through technical design rather than relying on promises or policies. Guided by its principle of “don’t trust, verify,” the provider focuses on privacy-by-design to ensure that users are always protected. 

The technology behind VP.NET relies on Intel’s SGX (Software Guard Extensions). This system creates encrypted memory zones, also called enclaves, which remain isolated and inaccessible even to the VPN provider. Using this approach, VP.NET separates a user’s identity from their browsing activity, preventing any form of link between the two. 

The provider has also built a cryptographic mixer that severs the connection between users and the websites they visit. This mixer functions with a triple-layer identity mapping system, which the company claims makes tracking technically impossible. Each session generates temporary IDs, and no data such as IP addresses, browsing logs, traffic information, DNS queries, or timestamps are stored. 
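The idea of per-session temporary IDs that are never retained can be illustrated with a toy sketch. This is a conceptual illustration of unlinkable, ephemeral session identifiers only, not VP.NET's actual triple-layer mapping scheme; the `SessionMixer` class and its methods are invented for the example.

```python
import secrets

class SessionMixer:
    """Toy illustration: give each session a random ephemeral ID and
    keep no record of the mapping once the session closes."""

    def __init__(self):
        self._active = {}  # ephemeral_id -> opaque session handle

    def open_session(self):
        eid = secrets.token_hex(16)  # 128 random bits, unlinkable to identity
        self._active[eid] = object()
        return eid

    def close_session(self, eid):
        # Discard the mapping entirely; nothing is logged or retained
        self._active.pop(eid, None)

mixer = SessionMixer()
a, b = mixer.open_session(), mixer.open_session()
print(a != b, len(mixer._active))   # two distinct live sessions
mixer.close_session(a)
mixer.close_session(b)
print(len(mixer._active))           # no residual state after close
```

The key property is that once `close_session` runs, no stored data links a past session back to a user, which is the behaviour VP.NET claims to enforce in hardware via SGX rather than in application code.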

VP.NET has also incorporated traffic obfuscation features and safeguards against correlation attacks, which are commonly used to unmask VPN users. In an effort to promote transparency, VP.NET has made its SGX source code publicly available on GitHub. By doing so, users and researchers can confirm that the correct code is running, the SGX enclave is authentic, and there has been no tampering. VP.NET describes its system as “zero trust by design,” emphasizing that its architecture makes it impossible to record user activity. 

The service runs on the WireGuard protocol and includes several layers of encryption. These include ChaCha20 for securing traffic, Poly1305 for authentication, Curve25519 for key exchange, and BLAKE2s for hashing. VP.NET is compatible with Windows, macOS, iOS, Android, and Linux systems, and all platforms receive the same protections. Each account allows up to five devices to connect simultaneously, which is slightly lower than competitors like NordVPN, Surfshark, and ExpressVPN. Server availability is currently limited to a handful of countries including the US, UK, Germany, France, the Netherlands, and Japan. 
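Of these primitives, BLAKE2s is available directly in Python's standard library, which makes it easy to see what the hashing layer looks like in practice. The inputs and key below are made-up examples for illustration, not values from the WireGuard protocol itself.

```python
import hashlib

# Plain BLAKE2s hash: a 32-byte digest, as used for hashing in WireGuard
digest = hashlib.blake2s(b"handshake transcript").hexdigest()
print(len(digest))  # 64 hex characters = 32 bytes

# BLAKE2s also supports a keyed mode, usable as a MAC (key up to 32 bytes)
mac = hashlib.blake2s(b"message", key=b"example-32-byte-demo-key").hexdigest()
print(len(mac))     # also a 32-byte digest
```

WireGuard favours BLAKE2s precisely because it is fast in software and supports keyed hashing natively, avoiding a separate HMAC construction in most of the handshake.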

However, all servers are SGX-enabled to maintain strong privacy. While the company operates from the United States, a jurisdiction often criticized for weak privacy laws, VP.NET argues that its architecture makes the question of location irrelevant since no user data exists to be handed over. 

Despite being relatively new, VP.NET is positioning itself as part of a new wave of VPN providers alongside competitors like Obscura VPN and NymVPN, all of which are introducing fresh approaches to strengthen privacy. 

With surveillance and tracking threats becoming increasingly sophisticated, VP.NET’s SGX-based system represents a technical shift that could redefine how users think about online security and anonymity.