
Millions of Resumes Exposed Online Due to Unsecured Hiring Platform

 



A major data exposure has come to light after cybersecurity experts discovered an unsecured online storage system containing nearly 26 million documents, many of which appear to be resumes of job seekers in the United States.

The exposed files were found in a cloud-based storage system, commonly used to save and share digital files. According to the research team, this storage space had not been properly secured, meaning anyone who knew where to look could access its contents without needing a password or any special permissions.

Further examination revealed that the majority of the documents stored in the system were personal resumes and CVs. These files included sensitive personal details such as full names, phone numbers, email addresses, education history, previous work experience, and other professional information. In the wrong hands, such detailed personal data poses a serious security risk.

Experts warn that job seekers are particularly vulnerable in situations like this. If cybercriminals gain access to such data, they can use it to send highly personalized scam messages. These messages may appear trustworthy, as they can be tailored using real employment history or job interests, making it easier to trick someone into clicking a malicious link or sharing their login information.

One common tactic includes sending fake job offers or interview invitations that secretly install harmful software on a person’s device. Some advanced scams may even go as far as conducting fake job interviews before sending victims "sample tasks" that involve downloading malware.

The database in question was linked to a platform used by employers and hiring teams to manage job applications and connect with candidates. However, the researchers who found the issue say they have received no confirmation that access to the exposed files has been blocked. While the team reached out to suggest tightening security settings, it's unclear whether any action was taken.

There is no evidence so far that cybercriminals have used the data, but experts note that the longer the files remain unprotected, the higher the risk of misuse. Even if no signs of abuse have appeared, the availability of such information online creates an ongoing threat.

This situation serves as a reminder for companies handling sensitive data to prioritize cybersecurity. Properly configuring cloud storage, regularly updating access settings, and limiting who can view certain files are essential steps in preventing such exposures. It's not just about protecting a system; it's about safeguarding real people's identities and futures.
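To make the configuration advice concrete, here is a minimal sketch of how a team might audit a cloud bucket policy for the kind of misconfiguration described above. The report does not name the storage provider, so this example assumes an AWS S3-style JSON policy, and the `has_public_read` function is a hypothetical helper, not part of any official SDK:

```python
import json

def has_public_read(policy_json: str) -> bool:
    """Return True if any statement grants object reads to everyone ("*")."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # A principal of "*" (or {"AWS": "*"}) means anyone on the internet.
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if is_public and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

# Example of the dangerous pattern: world-readable objects (bucket name is illustrative).
public_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-resumes/*",
    }],
})
print(has_public_read(public_policy))  # → True
```

A check like this only catches one class of misconfiguration; provider-native controls such as S3 Block Public Access and regular access reviews remain the primary safeguards.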


Fake Candidates, Real Threat: Deepfake Job Applicants Are the New Cybersecurity Challenge

 

When voice authentication firm Pindrop Security advertised an opening for a senior engineering role, one resume caught their attention. The candidate, a Russian developer named Ivan, appeared to be a perfect fit on paper. But during the video interview, something felt off—his facial expressions didn’t quite match his speech. It turned out Ivan wasn’t who he claimed to be.

According to Vijay Balasubramaniyan, CEO and co-founder of Pindrop, Ivan was a fraudster using deepfake software and other generative AI tools in an attempt to secure a job through deception.

“Gen AI has blurred the line between what it is to be human and what it means to be machine,” Balasubramaniyan said. “What we’re seeing is that individuals are using these fake identities and fake faces and fake voices to secure employment, even sometimes going so far as doing a face swap with another individual who shows up for the job.”

While businesses have always had to protect themselves against hackers targeting vulnerabilities, a new kind of threat has emerged: job applicants powered by AI who fake their identities to gain employment. From forged resumes and AI-generated IDs to scripted interview responses, these candidates are part of a fast-growing trend that cybersecurity experts warn is here to stay.

In fact, a Gartner report predicts that by 2028, 1 in 4 job seekers globally will be using some form of AI-generated deception.

The implications for employers are serious. Fraudulent hires can introduce malware, exfiltrate confidential data, or simply draw salaries under false pretenses.

A Growing Cybercrime Strategy

This problem is especially acute in cybersecurity and crypto startups, where remote hiring makes it easier for scammers to operate undetected. Ben Sesser, CEO of BrightHire, noted a massive uptick in these incidents over the past year.

“Humans are generally the weak link in cybersecurity, and the hiring process is an inherently human process with a lot of hand-offs and a lot of different people involved,” Sesser said. “It’s become a weak point that folks are trying to expose.”

This isn’t a problem confined to startups. Earlier this year, the U.S. Department of Justice disclosed that over 300 American companies had unknowingly hired IT workers tied to North Korea. The impersonators used stolen identities, operated via remote networks, and allegedly funneled salaries back to fund the country’s weapons program.

Criminal Networks & AI-Enhanced Resumes

Lili Infante, founder and CEO of Florida-based CAT Labs, says her firm regularly receives applications from suspected North Korean agents.

“Every time we list a job posting, we get 100 North Korean spies applying to it,” Infante said. “When you look at their resumes, they look amazing; they use all the keywords for what we’re looking for.”

To filter out such applicants, CAT Labs relies on ID verification companies like iDenfy, Jumio, and Socure, which specialize in detecting deepfakes and verifying authenticity.

The issue has expanded far beyond North Korea. Experts like Roger Grimes, a longtime computer security consultant, report similar patterns with fake candidates originating from Russia, China, Malaysia, and South Korea.

Ironically, some of these impersonators end up excelling in their roles.

“Sometimes they’ll do the role poorly, and then sometimes they perform it so well that I’ve actually had a few people tell me they were sorry they had to let them go,” Grimes said.

Even KnowBe4, the cybersecurity firm Grimes works with, accidentally hired a deepfake engineer from North Korea who used AI to modify a stock photo and passed through multiple background checks. The deception was uncovered only after suspicious network activity was flagged.

What Lies Ahead

Despite a few high-profile incidents, most hiring teams still aren’t fully aware of the risks posed by deepfake job applicants.

“They’re responsible for talent strategy and other important things, but being on the front lines of security has historically not been one of them,” said BrightHire’s Sesser. “Folks think they’re not experiencing it, but I think it’s probably more likely that they’re just not realizing that it’s going on.”

As deepfake tools become increasingly realistic, experts believe the problem will grow harder to detect. Fortunately, companies like Pindrop are already developing video authentication systems to fight back. It was one such system that ultimately exposed “Ivan X.”

Although Ivan claimed to be in western Ukraine, his IP address revealed he was operating from a Russian military base near North Korea, according to the company.

Pindrop, backed by Andreessen Horowitz and Citi Ventures, originally focused on detecting voice-based fraud. Today, it may be pivoting toward defending video and digital hiring interactions.

“We are no longer able to trust our eyes and ears,” Balasubramaniyan said. “Without technology, you’re worse off than a monkey with a random coin toss.”