
Evaly Website Allegedly Hacked Amid Legal Turmoil, Hacker Threatens to Leak Customer Data


Evaly, the controversial e-commerce platform based in Bangladesh, appeared to fall victim to a cyberattack on 24 May 2025. Visitors to the site were met with a stark warning reportedly left by a hacker, claiming to have obtained the platform’s customer data and urging Evaly staff to make contact.

Displayed in bold capital letters, the message read: “HACKED, I HAVE ALL CUSTOMER DATA. EVALY STAFF PLEASE CONTACT 00watch@proton.me.” It also warned, “OR ELSE I WILL RELEASE THIS DATA TO THE PUBLIC,” signaling that private user information could be exposed if the hacker’s demand is ignored.

It remains unclear what specific data was accessed or whether sensitive financial or personal details were involved. So far, Evaly has not released any official statement addressing the breach or the nature of the compromised information.

This development comes on the heels of a fresh wave of legal action against Evaly and its leadership. On 13 April 2025, state-owned Bangladesh Sangbad Sangstha (BSS) reported that a Dhaka court handed down three-year prison sentences to Evaly’s managing director, Mohammad Rassel, and chairperson, Shamima Nasrin, in a fraud case.

Dhaka Metropolitan Magistrate M Misbah Ur Rahman delivered the judgment, which also included fines of BDT 5,000 each. The court issued arrest warrants for both executives following the ruling.

The case was filed by a customer, Md Rajib, who alleged that he paid BDT 12.37 lakh for five motorcycles that were never delivered. The transaction took place through Evaly’s website, which had gained attention for its deep discount offers and aggressive promotional tactics.

Account Takeover Fraud Surges as Cybercriminals Outpace Traditional Bank Defenses


As financial institutions bolster their fraud prevention systems, scammers are shifting tactics—favoring account takeover (ATO) fraud over traditional scams. Instead of manipulating victims into making transactions themselves, fraudsters are bypassing them entirely, taking control of their digital identities and draining funds directly.

Account takeover fraud involves unauthorized access to an individual's account to conduct fraudulent transactions. This form of cybercrime has seen a sharp uptick in recent years as attackers use increasingly advanced techniques—such as phishing, credential stuffing, and malware—to compromise online banking platforms. Conventional fraud detection tools, which rely on static behavior analysis, often fall short as bad actors now mimic legitimate user actions with alarming accuracy.

According to NICE Actimize's 2025 Fraud Insights U.S. Retail Payments report, account takeover incidents grew as a share of the total value of attempted fraud between 2023 and 2024. Scams nevertheless continue to dominate, making up 57% of all attempted fraud transactions.

Global financial institutions witnessed a significant spike in ATO-related incidents in 2024. Veriff's Identity Fraud Report recorded a 13% year-over-year rise in ATO fraud. FinCEN data further supports this trend, revealing that U.S. banks submitted more than 178,000 suspicious activity reports tied to ATO—a 36% increase from the previous year. AARP and Javelin Strategy & Research estimated that ATO fraud was responsible for $15.6 billion in losses in 2024.

Experts emphasize the need to embrace AI-powered behavioral biometrics, which offer real-time identity verification by continuously assessing how users interact with their devices. This shift from single-point login checks to ongoing authentication enables better threat detection while enhancing user experience. These systems adapt to variables such as device type, location, and time of access, supporting the NIST-recommended zero trust framework.
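
To make the idea concrete, the sketch below shows what a continuous, context-aware check might look like: every sensitive action is scored against a per-user baseline of devices, locations, and typical access hours rather than trusting a single login. The baseline structure, weights, and threshold are illustrative assumptions, not any vendor’s actual scoring model.

    # Minimal sketch of continuous, context-aware session risk scoring.
    # Baseline fields, weights, and the 0.6 threshold are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class UserBaseline:
        known_devices: set = field(default_factory=set)
        usual_countries: set = field(default_factory=set)
        usual_hours: range = range(7, 23)   # typical local access hours

    def session_risk(baseline: UserBaseline, device_id: str,
                     country: str, hour: int) -> float:
        """Return a 0.0-1.0 risk score for a single session event."""
        score = 0.0
        if device_id not in baseline.known_devices:
            score += 0.4   # unfamiliar device
        if country not in baseline.usual_countries:
            score += 0.4   # unfamiliar location
        if hour not in baseline.usual_hours:
            score += 0.2   # unusual time of access
        return min(score, 1.0)

    # Re-evaluated on every sensitive action, not just at login.
    baseline = UserBaseline({"dev-123"}, {"US"})
    if session_risk(baseline, "dev-999", "RO", hour=3) >= 0.6:
        print("step-up authentication required")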

"The most sophisticated measurement approaches now employ AI analytics to establish dynamic baselines for these metrics, enabling continuous ROI assessment as both threats and solutions evolve over time," said Jeremy London, director of engineering for AI and threat analytics at Keeper Security.

Emerging Fraud Patterns

The growth of ATO fraud is part of a larger evolution in cybercrime tactics. Cross-border payments are increasingly targeted: although international wire transfers declined by 6% in 2024, the dollar value of fraud attempts against them surged by 40%. Fraudsters are now focusing on high-value, low-volume transactions.

One particularly vulnerable stage is payee onboarding. Research shows that 67% of fraud incidents were linked to just 7% of transactions—those made to newly added payees. This finding suggests that cybercriminals are exploiting the early stages of payment relationships as a critical vulnerability.
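
A simple way to act on that finding is to treat payments to recently added payees as inherently higher risk. The sketch below flags high-value transfers to payees added within the last week; the seven-day window, dollar threshold, and data model are assumptions made for illustration only.

    # Sketch: flag large transfers to newly added payees for manual review.
    # The 7-day window and $10,000 threshold are illustrative assumptions.
    from datetime import datetime, timedelta

    NEW_PAYEE_WINDOW = timedelta(days=7)
    HIGH_VALUE = 10_000

    def needs_review(amount: float, payee_added_at: datetime, now: datetime) -> bool:
        """Flag high-value transfers going to a recently added payee."""
        is_new_payee = (now - payee_added_at) <= NEW_PAYEE_WINDOW
        return is_new_payee and amount >= HIGH_VALUE

    now = datetime.utcnow()
    added_yesterday = now - timedelta(days=1)
    print(needs_review(12_500, added_yesterday, now))   # True: large payment to a new payee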

Looking ahead, integrating multi-modal behavioral signals with AI-trained models to detect sophisticated threats will be key. This hybrid approach is vital for identifying both human-driven and synthetic fraud attempts in real time.

Cybercriminals Target Social Security Users with Sophisticated Phishing Scam


A new wave of phishing attacks is exploiting public trust in government agencies. Cybercriminals are sending fraudulent emails that appear to come from the Social Security Administration (SSA), aiming to trick recipients into downloading a remote access tool that gives hackers full control over their computers, according to a report by Malwarebytes.

The scam emails, often sent from compromised WordPress websites, claim to offer a downloadable Social Security statement. However, the entire message is typically embedded as an image—a tactic that allows it to bypass most email filters. Clicking the link installs ScreenConnect, a legitimate remote access tool that, in the attackers’ hands, gives them full remote control of the victim’s device.
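
Mail teams that want a rough defense against this tactic can flag messages whose visible body is essentially a single image plus a link, since genuine SSA correspondence is text-based. The heuristic below is a sketch built on Python’s standard email parser; the 100-character text threshold and the overall rule are assumptions, not a Malwarebytes or SSA recommendation.

    # Sketch: flag emails whose body is mostly an image wrapped in a link,
    # a common trick for slipping phishing content past text-based filters.
    import re
    from email import message_from_bytes

    def looks_like_image_only_phish(raw_bytes: bytes) -> bool:
        msg = message_from_bytes(raw_bytes)
        text_len, has_image, has_link = 0, False, False
        for part in msg.walk():
            ctype = part.get_content_type()
            if ctype.startswith("image/"):
                has_image = True
            elif ctype == "text/html":
                html = part.get_payload(decode=True).decode(errors="ignore")
                has_link = has_link or "<a " in html.lower()
                has_image = has_image or "<img" in html.lower()
                # Strip tags and count what a reader would actually see as text.
                text_len += len(re.sub(r"<[^>]+>", " ", html).strip())
            elif ctype == "text/plain":
                text_len += len(part.get_payload(decode=True).decode(errors="ignore").strip())
        # Almost no readable text, but an image and a clickable link: suspicious.
        return has_image and has_link and text_len < 100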

The campaign has been attributed to a phishing group known as Molatori, whose goal is to extract personal, banking, and other sensitive information. “Once in, the attackers can steal your data, commit financial fraud, and engage in identity theft,” the report warns.

To avoid falling victim, experts suggest staying alert to red flags. These scam emails often contain poor grammar, missing punctuation, strange formatting, and unusual colour schemes for links. Such errors—evident in screenshots shared by Malwarebytes and the SSA—are clear signs of a scam, even as AI-driven tactics make phishing attempts more convincing than ever.

“If you want to view your Social Security statement, the safest option is to visit ssa.gov,” the SSA advises.

What to Do If You're Targeted:

  • Cut off all communication with the scammer
  • Report the incident to the SSA Office of the Inspector General (OIG)
  • File a report with your local police
  • If you've lost money, submit a complaint to the FBI’s Internet Crime Complaint Center (IC3)

As phishing threats continue to evolve, cybersecurity awareness remains your best defense.


Barclays Introduces New Step-by-Step Model to Tackle Modern Fraud

Banks and shops are facing more advanced types of fraud that mix online tricks with real-world scams. To fight back, experts from Barclays and a security company called ThreatFabric have created a detailed model to understand how these frauds work from start to finish. This system is called a fraud kill chain, and it helps organizations break down and respond to fraud at every stage.


What Is a Kill Chain?

The kill chain idea originally came from the military. It was used to describe each step of an attack so it could be stopped in time. In 2011, cybersecurity experts started using it to map out how hackers attack computer systems. This helped security teams block online threats like viruses, phishing emails, and ransomware.

But fraud doesn’t always follow the same patterns as hacking. It often includes human error, emotional tricks, and real-life actions. That’s why banks like Barclays needed a different version of the kill chain made specifically for financial fraud.


Why Fraud Needs a New Framework

Barclays noticed a new type of scam using tap-to-pay systems—also known as NFC, or near-field communication. This technology lets people pay by simply tapping their cards or phones. Criminals found ways to misuse this by copying the signals and using them without permission.

When Barclays and ThreatFabric studied these scams, they realized that the NFC trick was just one part of a larger process. There were many steps before and after it. But there was no clear way for banks and retailers to explain or share all this information. So, they created a new model to organize it all.


How the Fraud Kill Chain Works

The new fraud kill chain has ten steps. It starts with the fraudsters gathering data about victims and moves through stages like emotional manipulation, fake messages, stealing passwords, getting into accounts, and finally taking and hiding the money. Each of these steps includes different tricks and techniques.

For example, a scam might begin with a fake text message asking the victim to click a link. Once the victim enters their details, criminals can add the victim’s card to a device they control and make payments from far away. This kind of attack is sometimes called a ghost tap.
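
To give a sense of how such a model can be put to work, the sketch below encodes kill-chain stages as a shared vocabulary and tags the events of a ghost-tap style incident against them. The stage names follow the steps described above, not Barclays’ official ten-stage list, and the event mapping is purely illustrative.

    # Sketch: a shared vocabulary of fraud kill-chain stages so different teams
    # can label the same incident consistently. Stage names are illustrative
    # placeholders drawn from the steps described above, not Barclays' own list.
    from enum import IntEnum

    class FraudStage(IntEnum):
        RECONNAISSANCE = 1        # gathering data about victims
        SOCIAL_ENGINEERING = 2    # emotional manipulation, fake messages
        CREDENTIAL_THEFT = 3      # stealing passwords or card details
        ACCOUNT_ACCESS = 4        # getting into the account or wallet
        CASH_OUT = 5              # taking the money (e.g. a "ghost tap")
        LAUNDERING = 6            # hiding the money

    # Tagging observed events from a ghost-tap style incident.
    incident_timeline = [
        ("victim receives fake delivery text", FraudStage.SOCIAL_ENGINEERING),
        ("victim enters card details on fake site", FraudStage.CREDENTIAL_THEFT),
        ("card provisioned to attacker's phone", FraudStage.ACCOUNT_ACCESS),
        ("remote NFC tap-to-pay purchases", FraudStage.CASH_OUT),
    ]

    earliest = min(stage for _, stage in incident_timeline)
    print(f"Earliest detectable stage: {earliest.name}")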


Retailers Use Their Own Version

Retail companies like Target are also building similar models. They’ve found that even simple scams, like messing with gift cards, involve many people and actions. Without a clear way to describe each part, it's hard for teams to stop them in time.

By using a structured approach to fraud, companies can better understand how scams happen, spot weak points, and stop future attacks. This new model helps everyone speak the same language when it comes to stopping fraud—and protects people from losing their money.

Massive Data Leak Exposes 520,000+ Ticket Records from Resale Platform 'Ticket to Cash'


A critical security lapse at online ticket resale platform Ticket to Cash has led to a major data breach, exposing over 520,000 records, according to a report by vpnMentor. The leak was first uncovered by cybersecurity researcher Jeremiah Fowler, who found the unsecured and unencrypted database without any password protection.

The database, weighing in at a massive 200 GB, contained a mix of PDFs, images, and JSON files. Among the leaked files were thousands of concert and live event tickets, proof of transfers, and receipt screenshots. Alarmingly, many documents included personally identifiable information (PII) such as full names, email addresses, physical addresses, and partial credit card details.

Using the internal structure and naming conventions within the files, Fowler traced the data back to Ticket to Cash, a company that facilitates ticket resale through over 1,000 partner websites. “Despite contacting TicketToCash.com through a responsible disclosure notice,” Fowler reported, “I initially received no response, and the database remained publicly accessible.” It wasn’t until four days later, following a second notice, that the data was finally secured. By then, an additional 2,000+ files had been exposed.

Whether the database was maintained by Ticket to Cash itself or by a third-party contractor remains uncertain. It is also unknown how long the database was left exposed or whether it was accessed by malicious actors. “Only a thorough internal forensic investigation could provide further clarity,” Fowler emphasized.

Ticket to Cash enables users to list tickets without upfront fees, taking a cut only when sales occur. However, the company has faced criticism over customer service, particularly regarding payment delays via PayPal and difficulty reaching support. Fowler also noted the lack of prompt communication during the disclosure process.

This breach raises serious concerns over data privacy and cybersecurity practices in the digital ticketing world. Leaked PII and partial financial information are prime targets for identity theft and fraud, posing risks well beyond the original ticketed events. As online ticketing becomes more widespread, this incident serves as a stark reminder of the need for strong security protocols and rapid response mechanisms to safeguard user data.

Global Cybercrime Crackdown Dismantles Major Phishing-as-a-Service Platform ‘LabHost’


In a major international crackdown, a law enforcement operation spearheaded by the London Metropolitan Police and coordinated by Europol has successfully taken down LabHost, one of the most notorious phishing-as-a-service (PhaaS) platforms used by cybercriminals worldwide.

Between April 14 and April 17, 2024, authorities carried out synchronized raids across 70 different sites globally, resulting in the arrest of 37 individuals. Among those arrested were four suspects in the UK believed to be the platform’s original creators and administrators. Following the arrests, LabHost’s digital infrastructure was completely dismantled.

LabHost had gained infamy for its ease of use and wide accessibility, making it a go-to cybercrime tool. The service offered more than 170 fake website templates imitating trusted brands from the banking, telecom, and logistics sectors—allowing users to craft convincing phishing campaigns with minimal effort.

According to authorities, LabHost supported over 40,000 phishing domains and catered to approximately 10,000 users across the globe. The coordinated enforcement effort was supported by Europol’s European Cybercrime Centre (EC3) and the Joint Cybercrime Action Taskforce (J-CAT), with 19 countries actively participating in the investigation.

LabHost showcased how cybercrime has become industrialized through subscription-based platforms. For a monthly fee of around $249, subscribers could access phishing kits, fraudulent websites, hosting services, and even tools to interact with victims in real time.

One of its most dangerous features was LabRat, an integrated dashboard that enabled users to monitor ongoing phishing attacks. This tool also allowed cybercriminals to intercept two-factor authentication codes and login credentials, effectively bypassing modern security measures.

Its user-friendly interface eliminated the need for technical skills—opening the door for anyone with malicious intent and a credit card to launch sophisticated phishing schemes. The platform's popularity contributed to a spike in identity theft, financial fraud, and widespread data breaches.

Authorities hailed the takedown as a milestone in the fight against cybercrime. However, they also cautioned that the commoditization of cybercrime remains a serious concern.

"This is a critical blow to phishing infrastructure," cybersecurity experts said, "but the ease of recreating similar platforms continues to pose a major threat."

Following the seizure of LabHost’s backend systems, law enforcement agencies have begun analyzing the data to identify both the perpetrators and their victims, marking the start of a new wave of investigations and preventative measures.

The operation involved agencies from 19 countries, including the FBI and Secret Service from the United States, as well as cybercrime units in Canada, Germany, the Netherlands, Poland, Spain, Australia, and the UK. This unprecedented level of international cooperation highlights the cross-border nature of cyber threats and the importance of unified global action.

As authorities prepare for a fresh wave of prosecutions, the LabHost takedown stands as a defining moment in cyber law enforcement—both in its impact and its symbolism.

Fake Candidates, Real Threat: Deepfake Job Applicants Are the New Cybersecurity Challenge


When voice authentication firm Pindrop Security advertised an opening for a senior engineering role, one resume caught their attention. The candidate, a Russian developer named Ivan, appeared to be a perfect fit on paper. But during the video interview, something felt off—his facial expressions didn’t quite match his speech. It turned out Ivan wasn’t who he claimed to be.

According to Vijay Balasubramaniyan, CEO and co-founder of Pindrop, Ivan was a fraudster using deepfake software and other generative AI tools in an attempt to secure a job through deception.

“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”

While businesses have always had to protect themselves against hackers targeting vulnerabilities, a new kind of threat has emerged: job applicants powered by AI who fake their identities to gain employment. From forged resumes and AI-generated IDs to scripted interview responses, these candidates are part of a fast-growing trend that cybersecurity experts warn is here to stay.

In fact, a Gartner report predicts that by 2028, 1 in 4 job seekers globally will be using some form of AI-generated deception.

The implications for employers are serious. Fraudulent hires can introduce malware, exfiltrate confidential data, or simply draw salaries under false pretenses.

A Growing Cybercrime Strategy

This problem is especially acute in cybersecurity and crypto startups, where remote hiring makes it easier for scammers to operate undetected. Ben Sesser, CEO of BrightHire, noted a massive uptick in these incidents over the past year.

“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”

This isn’t a problem confined to startups. Earlier this year, the U.S. Department of Justice disclosed that over 300 American companies had unknowingly hired IT workers tied to North Korea. The impersonators used stolen identities, operated via remote networks, and allegedly funneled salaries back to fund the country’s weapons program.

Criminal Networks & AI-Enhanced Resumes

Lili Infante, founder and CEO of Florida-based CAT Labs, says her firm regularly receives applications from suspected North Korean agents.

“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”

To filter out such applicants, CAT Labs relies on ID verification companies like iDenfy, Jumio, and Socure, which specialize in detecting deepfakes and verifying authenticity.

The issue has expanded far beyond North Korea. Experts like Roger Grimes, a longtime computer security consultant, report similar patterns with fake candidates originating from Russia, China, Malaysia, and South Korea.

Ironically, some of these impersonators end up excelling in their roles.

“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.

Even KnowBe4, the cybersecurity firm Grimes works with, accidentally hired a deepfake engineer from North Korea who used AI to modify a stock photo and passed through multiple background checks. The deception was uncovered only after suspicious network activity was flagged.

What Lies Ahead

Despite a few high-profile incidents, most hiring teams still aren’t fully aware of the risks posed by deepfake job applicants.

“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” said BrightHire’s Sesser. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”

As deepfake tools become increasingly realistic, experts believe the problem will grow harder to detect. Fortunately, companies like Pindrop are already developing video authentication systems to fight back. It was one such system that ultimately exposed “Ivan X.”

Although Ivan claimed to be in western Ukraine, his IP address revealed he was operating from a Russian military base near North Korea, according to the company.
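
Hiring teams that want a basic version of this check can compare where a candidate claims to be with where their interview connection actually originates. In the sketch below, geolocate_country is a hypothetical stand-in for whatever IP-geolocation service is already available, and the lookup data is fabricated test input; the point is simply to flag mismatches for human follow-up, not to identify anyone automatically.

    # Sketch: flag a mismatch between a candidate's claimed location and the
    # country their interview connection comes from. geolocate_country() is a
    # hypothetical stand-in for a real IP-geolocation service.
    def geolocate_country(ip_address: str) -> str:
        lookup = {"203.0.113.7": "RU"}      # illustrative test data only
        return lookup.get(ip_address, "UNKNOWN")

    def location_mismatch(claimed_country: str, ip_address: str) -> bool:
        observed = geolocate_country(ip_address)
        # Treat unknown results as inconclusive rather than as a red flag.
        return observed != "UNKNOWN" and observed != claimed_country.upper()

    print(location_mismatch("UA", "203.0.113.7"))   # True: claims Ukraine, connects from Russia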

Pindrop, backed by Andreessen Horowitz and Citi Ventures, originally focused on detecting voice-based fraud. Today, it may be pivoting toward defending video and digital hiring interactions.

“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”

Malicious PyPI Package ‘disgrasya’ Exploits WooCommerce Stores for Card Fraud, Downloaded Over 34,000 Times


A newly uncovered malicious Python package on PyPI, named ‘disgrasya’, has raised serious concerns after it was discovered exploiting WooCommerce-powered e-commerce sites to validate stolen credit card information. Before its removal, the package had been downloaded more than 34,000 times, signaling significant abuse within the developer ecosystem.

The tool specifically targeted WooCommerce sites using the CyberSource payment gateway, enabling threat actors to mass-test stolen credit card data obtained from dark web sources and data breaches. This process, known as carding, helps cybercriminals determine which cards are active and usable.

While PyPI has since removed the package, its high download count reveals the scale at which open-source platforms can be exploited for illicit operations.

"Unlike typical supply chain attacks that rely on deception or typosquatting, disgrasya made no attempt to appear legitimate," explains a report by Socket researchers.

"It was openly malicious, abusing PyPI as a distribution channel to reach a wider audience of fraudsters."

What sets ‘disgrasya’ apart is the transparency of its malicious intent. Unlike other deceptive packages that mask their true purpose, this one openly advertised its illicit capabilities in the description:

"A utility for checking credit cards through multiple gateways using multi-threading and proxies."

According to Socket, version 7.36.9 of the package introduced the core malicious features, likely bypassing stricter checks typically applied to initial versions.

The malicious script mimics legitimate shopping behavior by accessing real WooCommerce stores, identifying product IDs, and adding items to the cart. It then proceeds to the checkout page, where it harvests the CSRF token and CyberSource’s capture context—sensitive data used to securely process card payments.

Socket explains that these tokens are typically short-lived and hidden, but the script captures them instantly while populating the form with fake customer details.

Instead of sending the card details directly to CyberSource, the data is routed to a malicious server (railgunmisaka.com) that impersonates the legitimate payment gateway. The server returns a fake token, which the script uses to complete the checkout process on the real store. If the transaction is successful, the card is validated; otherwise, it moves on to the next.

"This entire workflow—from harvesting product IDs and checkout tokens, to sending stolen card data to a malicious third party, and simulating a full checkout flow—is highly targeted and methodical," says Socket.

"It is designed to blend into normal traffic patterns, making detection incredibly difficult for traditional fraud detection systems."

This fully automated workflow makes it easier for attackers to validate thousands of cards at scale—cards which can then be used for financial fraud or sold on underground marketplaces.

Socket also warns that traditional fraud detection systems are ill-equipped to catch these types of attacks due to their highly realistic emulation of customer behavior.

Despite the sophistication of the operation, Socket researchers suggest several measures online merchants can take to reduce their exposure (a rough sketch of the first two follows the list):
  • Block very low-value transactions (typically under $5), often used in carding tests.
  • Monitor for high failure rates on small orders from the same IP address or geographic region.
  • Implement CAPTCHA verification during checkout flows to disrupt automated tools.
  • Apply rate limiting on checkout and payment endpoints to slow down or block suspicious behavior.
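
As a rough sketch of how the first two of those measures might sit in a store’s checkout pipeline, the snippet below rejects very low-value orders outright and tracks recent payment failures per IP address. The thresholds, the ten-minute window, and the in-memory data structure are illustrative assumptions, not WooCommerce or CyberSource defaults.

    # Sketch of the first two mitigations: block tiny "card-testing" orders and
    # watch for clusters of failed payments from a single IP address.
    from collections import defaultdict, deque
    from time import time

    MIN_ORDER_VALUE = 5.00          # reject likely card-test amounts
    FAILURE_LIMIT = 5               # failed attempts allowed per window
    WINDOW_SECONDS = 600            # 10-minute sliding window

    failures_by_ip = defaultdict(deque)

    def allow_checkout(order_total: float, client_ip: str) -> bool:
        if order_total < MIN_ORDER_VALUE:
            return False            # typical carding probe
        window = failures_by_ip[client_ip]
        now = time()
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()        # drop failures outside the sliding window
        return len(window) < FAILURE_LIMIT

    def record_payment_failure(client_ip: str) -> None:
        failures_by_ip[client_ip].append(time())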