
Armenian Man Extradited to US After Targeting Oregon Tech Firm

 

The Justice Department said last Wednesday that an Armenian national is in federal custody on charges stemming from his alleged involvement in a wave of Ryuk ransomware attacks in 2019 and 2020. On June 18, Karen Serobovich Vardanyan, 33, was extradited to the United States from Ukraine.

On June 20, he appeared in federal court and pleaded not guilty to the charges. His seven-day jury trial is set to begin on August 26. Prosecutors charged Vardanyan with conspiracy, computer-related fraud, and computer-related extortion. Each charge carries a maximum penalty of five years in federal prison and a $250,000 fine.

Vardanyan and his accomplices, who include 45-year-old Levon Georgiyovych Avetisyan of Armenia and two 53-year-old Ukrainians, Oleg Nikolayevich Lyulyava and Andrii Leonydovich Prykhodchenko, are charged with gaining unauthorised access to computer networks in order to install Ryuk ransomware on hundreds of compromised workstations and servers between March 2019 and September 2020. 

Lyulyava and Prykhodchenko are still at large, while Avetisyan is in France awaiting a request for extradition from the United States. According to authorities, the Ryuk ransomware was widespread in 2019 and 2020, infecting thousands of people worldwide in the private sector, state and local governments, local school districts, and critical infrastructure. 

These included a series of attacks on American hospitals and on the Oregon technology company at the centre of the federal case against Vardanyan. Ryuk ransomware attacks have affected Hollywood Presbyterian Medical Centre, Universal Health Services, Electronic Warfare Associates, a North Carolina water company, and several U.S. newspapers.

Ryuk ransomware operators extorted victim firms by demanding Bitcoin ransom payments in exchange for decryption keys. According to Justice Department officials, Vardanyan and his co-conspirators received approximately 1,160 bitcoins in ransom payments from victim companies, totalling more than $15 million at the time.

Asia is a Major Hub For Cybercrime, And AI is Poised to Exacerbate The Problem

 

Southeast Asia has emerged as a global hotspot for cybercrime, where human trafficking and high-tech fraud collide. Criminal syndicates in nations like Cambodia and Myanmar run large-scale "pig butchering" operations: scam centres staffed by trafficked individuals who are forced to defraud victims in affluent markets like Singapore and Hong Kong.

The scale is staggering: one UN estimate puts the global losses from these scams at $37 billion. And things may soon get worse. The spike in cybercrime in the region has already had an impact on politics and policy. Thailand has reported a reduction in Chinese visitors this year, after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam camp; Bangkok is now having to convince tourists that it is safe to visit. Singapore recently enacted an anti-fraud law that authorises law enforcement to freeze the bank accounts of scam victims. 

But why has Asia become associated with cybercrime? Ben Goodman, Okta's general manager for Asia-Pacific, observes that the region has several distinct characteristics that make cybercrime schemes simpler to carry out. For example, the region is a "mobile-first market": popular mobile messaging apps including WhatsApp, Line, and WeChat promote direct communication between the fraudster and the victim. 

AI is also helping scammers navigate Asia's linguistic variety. Goodman observes that machine translations, although a "phenomenal use case for AI," can make it "easier for people to be baited into clicking the wrong links or approving something." Nation-states are becoming involved as well: Goodman mentions suspicions that North Korea is placing fake employees at major tech companies to acquire intelligence and bring much-needed funds into the isolated country.

A new threat: Shadow AI 

Goodman is concerned about a new AI risk in the workplace: "shadow" AI, in which individuals use personal accounts to access AI models without company oversight. That could be someone preparing a presentation for a company review by going into ChatGPT on a personal account and generating an image.

This can result in employees unintentionally submitting private information to a public AI platform, creating "potentially a lot of risk in terms of information leakage." Agentic AI can also blur the lines separating personal and professional identities, for instance when an agent is tied to your personal email rather than your business account.

This is where it gets tricky, in Goodman's view. Because AI agents have the ability to make decisions on behalf of users, it is critical to distinguish between users acting in their personal and professional capacities. “If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much greater,” Goodman warned.

Hidden Crypto Mining Operation Found in Truck Tied to Village Power Supply

 


In a surprising discovery, officials in Russia uncovered a secret cryptocurrency mining setup hidden inside a Kamaz truck parked near a village in the Buryatia region. The vehicle wasn’t just a regular truck: it was loaded with 95 mining machines and its own transformer, all connected to a nearby power line powerful enough to supply an entire community.


What Is Crypto Mining, and Why Is It Controversial?

Cryptocurrency mining is the process of creating digital coins and verifying transactions through a network called a blockchain — a digital ledger that can’t be altered. Computers solve complex calculations to keep this system running smoothly. However, this process demands huge amounts of electricity. For example, mining the popular coin Bitcoin consumes more power in a year than some entire countries.
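The "complex calculations" are, in Bitcoin's case, a proof-of-work search: miners repeatedly hash candidate block data until the result falls below a difficulty target. A minimal Python sketch of the idea, using a hex-digest prefix of zeros as a toy target (the real protocol applies double SHA-256 to a binary block header; the function and inputs here are illustrative):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest starts with
    `difficulty` zero hex characters -- a toy proof-of-work."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block with transactions", difficulty=4)
print(nonce, digest)
```

Each additional leading zero multiplies the expected number of hashes by 16, which is why real-world mining consumes so much electricity.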


Why Was This Setup a Problem?

While mining can help boost local economies and create tech jobs, it also brings risks, especially when done illegally. In this case, the truck was using electricity intended for homes without permission. The unauthorized connection reportedly caused power issues like low voltage, grid overload, and blackouts for local residents.

The illegal setup was discovered during a routine check by power inspectors in the Pribaikalsky District. Before law enforcement could step in, two people suspected of operating the mining rig escaped in a vehicle.


Not the First Incident

This wasn’t an isolated case. Authorities report that this is the sixth time this year such theft has occurred in Buryatia. Due to frequent power shortages, crypto mining is banned in most parts of the region from November through March. Even when allowed, only approved companies can operate in designated areas.


Wider Energy and Security Impacts

Crypto mining operations run 24/7 and demand a steady flow of electricity. This constant use strains power networks, increases local energy costs, and can cause outages when grids can’t handle the load. Because of this, similar mining restrictions have been put in place in other regions, including Irkutsk and Dagestan.

Beyond electricity theft, crypto mining also has ties to cybercrime. Security researchers have reported that some hacking groups secretly install mining software on infected computers. These programs run quietly, often at night, using stolen power and system resources without the owner’s knowledge. They can also steal passwords and disable antivirus tools to remain undetected.


The Environmental Cost

Mining doesn’t just hurt power grids — it also affects the environment. Many mining operations use electricity from fossil fuels, which contributes to pollution and climate change. Although a study from the University of Cambridge found that over half of Bitcoin mining now uses cleaner sources like wind, nuclear, or hydro power, a significant portion still relies on coal and gas.

Some companies are working to make mining cleaner. For example, projects in Texas and Bhutan are using renewable energy to reduce the environmental impact. But the challenge remains: crypto mining’s hunger for energy has far-reaching consequences.

Amid Federal Crackdown, Microsoft Warns Against Rising North Korean Jobs Scams


North Korean hackers are infiltrating high-profile US-based tech firms through employment scams, and experts say their tactics are becoming more advanced. Following a recent investigation, Microsoft has urged its peers to enforce stronger pre-employment verification measures and adopt policies that block unauthorized IT management tools.

Further investigation by the US government revealed that these actors were working to raise money for the North Korean government, which uses the funds for its state operations and its weapons program.

US imposes sanctions against North Korea

The US has imposed strict sanctions on North Korea that restrict US companies from hiring North Korean nationals. In response, threat actors create fake identities and use all kinds of tricks (such as VPNs) to obscure their real identities and locations, getting hired while avoiding detection.

Recently, the threat actors have started using spoofing tactics such as voice-changing tools and AI-generated documents to appear credible. In one incident, the scammers relied on an individual residing in New Jersey who set up shell companies to fool victims into believing they were paying a legitimate local business. The same individual also helped overseas partners get recruited.

DoJ arrests accused

The clever campaign has now come to an end: the US Department of Justice (DoJ) arrested and charged a US national, Zhenxing “Danny” Wang, with operating a “years-long” scam that earned over $5 million. The agency also arrested eight more people: six Chinese and two Taiwanese nationals. The arrested individuals are charged with money laundering, identity theft, hacking, sanctions violations, and conspiring to commit wire fraud.

In addition to drawing what Microsoft describes as hefty salaries, these individuals gain access to private organizational data. They exploit this access by stealing sensitive information and blackmailing the company.

Lazarus group behind such scams

One of the largest and most infamous hacking gangs worldwide is the North Korean state-sponsored group Lazarus. According to experts, the gang has stolen billions of dollars for the North Korean government through similar scams. The campaign is widely known as “Operation DreamJob”.

"To disrupt this activity and protect our customers, we’ve suspended 3,000 known Microsoft consumer accounts (Outlook/Hotmail) created by North Korean IT workers," said Microsoft.

International Criminal Court Hit by Advanced Cyber Attack, No Major Damage


Swift discovery helped the ICC

Last week, the International Criminal Court (ICC) announced that it had discovered a sophisticated and targeted cybersecurity incident. Its response mechanisms and prompt discovery helped contain the attack.

The ICC did not provide details about the attackers’ intentions, any data leaks, or other compromises. According to the statement, the ICC, which is headquartered in The Hague, the Netherlands, is conducting a threat assessment following the attack and taking measures to mitigate any impact.

Collective effort against threat actors

The constant support of nations that have ratified the Rome Statute helps the ICC maintain its capacity to carry out its mandate, a responsibility shared by all States Parties. “The Court considers it essential to inform the public and its States Parties about such incidents as well as efforts to address them, and calls for continued support in the face of such challenges,” the ICC said.

The ICC was founded in 2002 through the Rome Statute, an international treaty adopted by a coalition of sovereign states to create a court that prosecutes individuals for international crimes: war crimes, genocide, crimes against humanity, and the crime of aggression. The ICC is separate from the U.N.’s International Court of Justice, which brings cases against countries rather than individuals.

Similar attack in 2023

In 2023, the ICC reported another cybersecurity incident. That attack was described as an act of espionage aimed at undermining the Court’s mandate, and it forced the ICC to disconnect its systems from the internet.

In the past, the ICC has said that it had experienced increased security concerns as threats against its various elected officials rose. “The evidence available thus far indicates a targeted and sophisticated attack with the objective of espionage. The attack can therefore be interpreted as a serious attempt to undermine the Court's mandate," ICC said. 

Notable recent arrest warrants issued by the ICC include those for Russian President Vladimir Putin and Israeli Prime Minister Benjamin Netanyahu.

Cybercriminals Shift Focus to U.S. Insurance Industry, Experts Warn

 


Cybersecurity researchers are sounding the alarm over a fresh wave of cyberattacks now targeting insurance companies in the United States. This marks a concerning shift in focus by an active hacking group previously known for hitting retail firms in both the United Kingdom and the U.S.

The group, tracked by multiple cybersecurity teams, has been observed using sophisticated social engineering techniques to manipulate employees into giving up access. These tactics have been linked to earlier breaches at major companies and are now being detected in recent attacks on U.S.-based insurers.

According to threat analysts, the attackers tend to work one industry at a time, and all signs now suggest that insurance companies are their latest target. Industry experts stress that this sector must now be especially alert, particularly at points of contact like help desks and customer support centers, where attackers often try to deceive staff into resetting credentials or granting system access.

In just the past week, two U.S. insurance providers have reported cyber incidents. One of them identified unusual activity on its systems and disconnected parts of its network to contain the damage. Another confirmed experiencing disruptions traced back to suspicious network behavior, prompting swift action to protect data and systems. In both cases, full recovery efforts are still ongoing.

The hacking group behind these attacks is known for using clever psychological tricks rather than just technical methods. They often impersonate employees or use aggressive language to pressure staff into making security mistakes. After gaining entry, they may deploy harmful software like ransomware to lock up company data and demand payment.

Experts say that defending against such threats starts with stronger identity controls. This includes limiting access to critical systems, separating user accounts with different levels of privileges, and requiring strict verification before resetting passwords or registering new devices for multi-factor authentication (MFA).

Training staff to spot impersonation attempts is just as important. These attackers may use fake phone calls, messages, or emails that appear urgent or threatening to trick people into reacting without thinking. Awareness and skepticism are key defenses.

Authorities in other countries where similar attacks have taken place have also advised companies to double-check their security setups. Recommendations include enabling MFA wherever possible, keeping a close eye on login attempts—especially from unexpected locations—and reviewing how help desks confirm a caller’s identity before making account changes.

As cybercriminals continue to evolve their methods, experts emphasize that staying informed, alert, and proactive is essential. In industries like insurance, where sensitive personal and financial data is involved, even a single breach can lead to serious consequences for companies and their customers.

AI Integration Raises Alarms Over Enterprise Data Safety

 


Today's digital landscape has become increasingly interconnected, and cyber threats have grown in sophistication, significantly weakening the effectiveness of traditional security protocols. As enterprises continue to generate and store large volumes of sensitive, mission-critical data, cybercriminals have evolved their tactics to exploit emerging vulnerabilities, launch highly targeted attacks, and use advanced techniques to breach security perimeters.

In light of this rapidly evolving threat environment, organisations are increasingly forced to adopt more adaptive and intelligent security solutions in addition to conventional defences. In the field of cybersecurity, artificial intelligence (AI) has emerged as a significant force, particularly in the area of data protection. 

AI-powered data security frameworks are transforming the way threats are detected, analysed, and mitigated in real time. These systems enhance visibility across complex IT ecosystems, automate threat detection, and support rapid response by identifying patterns and anomalies that might go unnoticed by human analysts.

Additionally, artificial intelligence-driven systems allow organisations to develop risk-based mitigation strategies that are scalable and aligned with their business objectives. Beyond threat prevention, the integration of artificial intelligence plays a crucial role in maintaining regulatory compliance in an era where data protection laws are becoming increasingly stringent.

By continuously monitoring and assessing cybersecurity postures, artificial intelligence can assist businesses in upholding industry standards, minimising operational interruptions, and strengthening stakeholder confidence. As the cyber threat landscape continues to evolve, modern enterprises need to recognise that AI-enabled data security is no longer merely a strategic advantage but a fundamental requirement for safeguarding digital assets.

Varonis has recently revealed that 99% of organisations have their sensitive data exposed to artificial intelligence systems, a shocking finding that illustrates the importance of data-centric security. There has been a significant increase in the use of artificial intelligence tools in business operations over the past decade. The State of Data Security: Quantifying Artificial Intelligence's Impact on Data Risk presents an in-depth analysis of how misconfigured settings, excessive access rights and neglected security gaps are leaving critical enterprise data vulnerable to AI-driven exploitation. 

An important characteristic of this report is that it relies on extensive empirical analysis rather than opinion surveys. To evaluate data risk across 1,000 organisations, Varonis conducted a comprehensive analysis spanning a variety of cloud computing environments, covering more than 10 billion cloud assets and over 20 petabytes of sensitive data.

Among them were platforms such as Amazon Web Services, Google Cloud, Microsoft Azure, Microsoft 365, Salesforce, Snowflake, Okta, Databricks, Slack, Zoom, and Box, which together provided a broad and realistic picture of enterprise data exposure in the age of artificial intelligence. Yaki Faitelson, CEO, President, and Co-Founder of Varonis, stressed the importance of balancing innovation with risk, noting that while AI's productivity gains are undeniable, the technology also poses serious security issues.

With CIOs and CISOs under growing pressure to adopt artificial intelligence rapidly, advanced data security platforms are in increasing demand. To prevent AI from becoming a gateway to large-scale data breaches, says Faitelson, organisations must take a proactive, data-oriented approach to cybersecurity. The researchers also explore two critical dimensions of AI-driven data exposure involving large language models (LLMs) and AI copilots: human-to-machine interaction and machine-to-machine integrity.

A key focus of the study was how sensitive data, such as employee compensation details, intellectual property, proprietary software, and confidential research and development insights, can be unintentionally accessed, leaked, or misused through a single prompt to an artificial intelligence interface if it is not protected. As AI assistants are increasingly used across departments, the risk of inadvertently disclosing critical business information has grown considerably.

The second category of risk concerns the integrity and trustworthiness of the data used to train or enhance artificial intelligence systems. Machine-to-machine vulnerabilities commonly arise when flawed, biased, or deliberately manipulated datasets are introduced into the learning cycle of machine learning algorithms.

Such corrupted data can have far-reaching and potentially dangerous consequences. For example, inaccurate or falsified clinical information could derail the development of life-saving medical treatments, while malicious actors may embed harmful code within AI training pipelines, introducing backdoors or vulnerabilities into applications that are not immediately detected.

The dual-risk framework emphasises the importance of tackling artificial intelligence security holistically, taking into account the entire lifecycle of data, from acquisition and input to training and deployment, not just user-level controls. By considering both human-induced and systemic risks associated with generative AI tools, organisations can implement more resilient safeguards for their most valuable data assets.

Organisations should go beyond conventional governance models to secure sensitive data in the age of AI. In an environment where AI systems require dynamic, expansive access to vast datasets, traditional approaches to data protection, often rooted in static policies and role-based access, are no longer sufficient.

Looking towards AI-ready security, a critical balance must be struck between robust protection against misuse, leakage, and regulatory non-compliance, and enabling data access for innovation. Organisations need to adopt a multilayered, forward-thinking security strategy customised for AI ecosystems to meet these challenges.

Key components include a data-tagging and classification strategy that identifies and categorises sensitive information so it can be handled according to its criticality. In place of role-based access control (RBAC), attribute-based access control (ABAC) allows more granular access policies based on user identity, context, and the sensitivity of the data.
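The RBAC-to-ABAC shift can be pictured as a policy function that evaluates several attributes together rather than a single role name. A minimal Python sketch (the attributes, thresholds, and department rule are invented for illustration; real deployments use dedicated policy engines):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_dept: str
    user_clearance: int        # 1 = public ... 3 = restricted
    resource_sensitivity: int  # same 1..3 scale
    on_corporate_network: bool

def abac_allow(req: AccessRequest) -> bool:
    """Grant access only when every attribute-based condition holds,
    rather than keying the decision off one role name as in RBAC."""
    return (
        req.user_clearance >= req.resource_sensitivity                    # clearance
        and (req.resource_sensitivity < 2 or req.user_dept == "finance")  # department
        and (req.resource_sensitivity < 3 or req.on_corporate_network)    # context
    )

print(abac_allow(AccessRequest("finance", 3, 3, True)))   # True
print(abac_allow(AccessRequest("finance", 3, 3, False)))  # False: off-network
```

The context condition is what RBAC cannot express: the same user is allowed or denied depending on where the request comes from and how sensitive the data is.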

Aside from that, organisations need to design AI-aware data pipelines with proactive security checkpoints built in, so they can monitor how their data is used by artificial intelligence tools. Output validation also becomes crucial: mechanisms that check AI-generated outputs for compliance, accuracy, and potential risk before they are circulated internally or externally.
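An output-validation checkpoint can start as simply as pattern-screening a response before release. A hedged sketch (the pattern set and labels are invented; production systems pair this kind of screen with classifiers and data-tagging metadata):

```python
import re

# Illustrative patterns -- tune to the data classes your tagging scheme defines.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}\b"),
}

def screen_output(text: str) -> list[str]:
    """Return the labels of any sensitive patterns found in an
    AI-generated response; an empty list means release is allowed."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

print(screen_output("Quarterly revenue grew 12%."))         # []
print(screen_output("Employee SSN on file: 123-45-6789."))  # ['ssn']
```

A checkpoint like this would sit at the pipeline's exit: responses with a non-empty label list are blocked or routed for review instead of being circulated.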

The complexity of this landscape has only been compounded by the rise of global and regional regulations governing data protection and artificial intelligence. In addition to general data privacy frameworks such as GDPR and CCPA, businesses must now prepare for emerging AI-specific regulations that put a stronger emphasis on how AI systems access and process sensitive data. As a result of this regulatory evolution, organisations need to maintain a security posture that is both agile and anticipatory.

Matillion Data Productivity Cloud, for instance, is a solution that embodies this principle of "secure by design". It is a hybrid cloud SaaS platform tailored to enterprise environments.

With its standardised encryption and authentication protocols, the platform integrates easily into enterprise networks through secure cloud infrastructure. It is built around a pushdown architecture that keeps customer data within the organisation's own cloud environment while allowing advanced orchestration of complex data workflows, minimising the risk of data exposure.

Rather than moving data, Matillion focuses on metadata management and workflow automation, giving organisations secure, efficient data operations and faster insights with a higher level of data integrity and compliance. As AI exerts this dual pressure, organisations must embrace a paradigm shift in which security is woven into the fabric of the data lifecycle.

A shift from traditional governance systems to more adaptive, intelligent frameworks will help secure data in the AI era. Because AI systems require broad access to enterprise data, organisations must strike a balance between openness and security. To achieve this, data should be tagged and classified, attribute-based access controls implemented for precise management of access, AI-aware data pipelines built with security checks, and output validation performed to prevent the distribution of risky or non-compliant AI-generated results.

With the rise of global and AI-specific regulations, companies need compliance strategies that will hold up in the future. Matillion Data Productivity Cloud is one example of a platform that offers a secure-by-design solution, combining a hybrid SaaS architecture with enterprise-grade security controls.

Through its pushdown processing, the customer's data will stay within the organisation's cloud environment while the workflows are orchestrated safely and efficiently. In this way, organisations can make use of AI confidently without sacrificing data security or compliance with the laws and regulations. As artificial intelligence and enterprise data security rapidly evolve, organisations need to adopt a future-oriented mindset that emphasises agility, responsibility, and innovation. 

It is no longer possible to rely on reactive cybersecurity; instead, businesses must embrace AI-literate governance models, advanced threat intelligence capabilities, and infrastructures designed with security in mind. Data security must be embedded into all phases of the data lifecycle, from creation and classification to accessing, analysing, and transforming data with AI. Leadership teams must develop a culture of continuous risk evaluation, and IT and data teams must be empowered to collaborate proactively with compliance, legal, and business units.

In order to maintain trust and accountability, it will be imperative to implement clear policies regarding AI usage, ensure traceability in data workflows, and establish real-time auditability. Further, with the maturation of AI regulations and the increasing demands for compliance across a variety of sectors, forward-looking organisations should begin aligning their operational standards with global best practices rather than waiting for mandatory regulations to be passed. 

A key component of artificial intelligence is data, and the protection of that foundation is a strategic imperative as well as a technical obligation. By putting the emphasis on resilient, ethical, and intelligent data security, today's companies will not only mitigate risk but will also be able to reap the full potential of AI tomorrow.

FBI Warns: Millions of Everyday Smart Devices Secretly Hijacked by Cybercriminals

 



The FBI recently raised concerns about a large-scale cybercrime network that has quietly taken control of millions of smart gadgets used in homes across the United States. This cyber threat, known as BADBOX 2.0, targets everyday devices such as TV streaming boxes, digital projectors, tablets, and even entertainment systems in cars.


What is BADBOX 2.0?

Unlike common malware that slows down or damages devices, BADBOX 2.0 silently turns these gadgets into part of a hidden network called a residential proxy network. This setup allows cybercriminals to use the victim's internet connection to carry out illegal activities, including online advertising fraud and data theft, without the device owner realizing anything is wrong.


Which Devices Are at Risk?

According to the FBI, the types of devices most affected include:

1. TV streaming boxes

2. Digital projectors

3. Aftermarket car infotainment systems

4. Digital photo frames

Many of these products are imported, often sold under unfamiliar or generic brand names. Some specific models involved in these infections belong to device families known as TV98 and X96, which are still available for purchase on popular online shopping platforms.


How Does the Infection Spread?

There are two main ways these devices become part of the BADBOX 2.0 network:

Pre-installed Malware: Some gadgets are already infected before they are even sold. This happens when malicious software is added during the manufacturing or shipping process.

Dangerous App Downloads: When setting up these devices, users are sometimes directed to install apps from unofficial sources. These apps can secretly install harmful software that gives hackers remote access.

This method shows how BADBOX 2.0 has advanced from its earlier version, which focused mainly on malware hidden deep within the device's firmware.


Signs Your Device May Be Infected

Users should watch for warning signs such as:

• The device asks to disable security protections like Google Play Protect.

• The brand is unfamiliar or seems generic.

• The device promises free access to paid content.

• You are prompted to download apps from unknown stores.

• Unusual or unexplained internet activity appears on your home network.


How to Stay Safe

The FBI recommends several steps to protect your home network:

1. Only use trusted app stores, like Google Play or Apple’s App Store.

2. Be cautious with low-cost, no-name devices. Extremely cheap gadgets are often risky.

3. Monitor your network regularly for unfamiliar devices or strange internet traffic.

4. Keep devices updated by installing the latest security patches and software updates.

5. If you believe one of your devices may be compromised, it is best to disconnect it immediately from your network and report the issue to the FBI through their official site at www.ic3.gov.

Be Careful with Cheap Deals


As experts warn, extremely low prices can sometimes hide dangerous risks. If something seems unusually cheap, it could come with hidden cyber threats.