
A Classic Case of Scamming the Scammers? Epic Games 'Hackers' Faked the Hack


Hackers stage Epic hack

A group announced earlier this week that they had successfully breached Epic Games and taken 189GB of data, including user information. They are now retracting their statements, claiming that they staged the whole event to deceive real hackers.

The group, which goes by the online handle Mogilevich, claims to have accomplished this by promising to sell would-be hackers the technology needed to gain access to Epic Games. Naturally, any technology and data it passed along, assuming it passed along anything at all, would be worthless if the attack never took place. According to Mogilevich, it sold this information to eight customers without ever demonstrating that it could breach an organization such as Epic.

Epic Games scam developments

Only a few days have passed since the "hack" was first made public. After allegedly stealing "emails, passwords, full names, payment information, source code" in its assault on Epic, Mogilevich appeared to be attempting to ransom the data back to the company.

However, Mogilevich has since altered the narrative entirely. Since it is possible the gang did pull off a hack and this is all misdirection, we cannot confirm whether its account of events is accurate. It does, however, line up with Epic's statement that there was "zero evidence" of any hacking at all.

A Mogilevich member is said to have said, "You may be wondering why all this, and now I'm going to explain everything you need," on a page that it had previously promised would contain information from the Epic breach. "In reality, we are not a ransomware-as-a-service, but professional fraudsters."

Gang aimed to get new contacts

Explaining its methods, Mogilevich claims it staged the operation to build fresh contacts for fraud. According to the gang, everything went to plan on that front, with aspiring hackers reportedly sending over tens of thousands of dollars.

"We don't think of ourselves as hackers but rather as criminal geniuses, if you can call us that", the message continues. They acknowledge that their goal was to acquire access to new "victims to scam," but ideally, users and employees of Epic Games are not among these victims.

Epic has yet to respond to this revelation.


Deepfake Deception: Man Duped of Rs 5 Crore as Chinese Scammer Exploits AI Technology

 

A recent incident has highlighted the alarming misuse of artificial intelligence (AI) through advanced 'deepfake' technology: a man in China was deceived into losing more than Rs 5 crore. Deepfakes, which use AI to generate counterfeit images and videos, have raised concerns because of their potential to spread misinformation.

According to a recent report by Reuters, the perpetrator used AI-powered face-swapping technology to impersonate a close acquaintance of the victim. Posing as the friend on a video call, the scammer requested an urgent transfer of 4.3 million yuan, falsely claiming the funds were needed for a bidding process. Unaware of the deception, the victim complied and transferred the requested amount.

The elaborate scheme began to unravel when the real friend said he knew nothing of the situation, leaving the victim perplexed. Only then did he realize he had fallen victim to a deepfake scam. Fortunately, the local authorities in Baotou City recovered most of the stolen funds and are actively pursuing the remainder.

The incident has heightened concerns in China about the misuse of AI in financial crime. While AI has brought significant advances across many domains, its misuse has become an increasingly worrisome issue. In a similar case last month, criminals used AI to clone a teenager's voice and attempt to extort a ransom from her mother, sending shockwaves around the world.

Jennifer DeStefano, a resident of Arizona, received a distressing call from an unknown number. At the time, her 15-year-old daughter was away on a skiing trip. When DeStefano answered, she heard what sounded like her daughter's voice, sobbing. The situation escalated when a male voice threatened her and warned her against involving the authorities.

In the background, DeStefano could hear her daughter's voice pleading for help. The scammer demanded a ransom of USD 1 million in exchange for the teenager's release. Convinced by the authenticity of the voice, DeStefano was deeply shaken by the ordeal.

Fortunately, DeStefano's daughter was unharmed and had never been kidnapped. The incident underscored the disconcerting capabilities of AI, which fraudsters can exploit to emotionally manipulate and deceive people for financial gain.

As AI continues to advance rapidly, individuals must remain vigilant and exercise caution. These incidents underline the importance of robust cybersecurity measures and of raising public awareness of the risks posed by deepfake technology. Authorities worldwide are working to combat these emerging threats and to protect people from falling victim to such sophisticated scams.

The incident in China serves as a stark reminder that as technology advances, vigilance and understanding must advance with it. Shielding ourselves and society from the misuse of AI is a collective responsibility that requires both technical safeguards and the cultivation of critical thinking.

These cases illustrate how AI can be exploited for financial crime, and why it is crucial to stay aware of the risks as the technology continues to evolve.