
Scamfluencers Use Social Media to Orchestrate Sophisticated Online Fraud


Scamfluencers, a rising category of deceptive internet personalities, are leveraging their online influence to run sophisticated scams that have already cost Americans an estimated $1.9 billion in 2024. 

These individuals masquerade as experts in finance, health, and other trusted fields, exploiting that perceived authority to extract money from their followers. By blending online popularity with calculated deceit, scamfluencers have become one of the most dangerous forms of digital manipulation today. 

According to Adewale Adeife, a cybersecurity consultant at EY, scamfluencers are especially dangerous because they merge social credibility with modern deception tactics: emotional manipulation, fabricated social proof such as fake likes and engagement pods, and, increasingly, AI-generated deepfakes that bolster their apparent authority. They fabricate credentials, pose as professionals, and lean on emotionally charged content to draw in followers. 

In one infamous example, teenager Malachi Love-Robinson posed as a medical doctor, tricking patients and professionals alike. Others may impersonate financial experts, promising “get-rich-quick” results backed by fake testimonials and limited-time offers. Tactics also include exploiting psychological tendencies like authority bias, where users are more likely to believe information from someone who appears famous or credentialed. 

Scamfluencers also use the consistency principle, starting with small asks that escalate into larger scams. Fear, greed, and urgency are common emotional triggers they use to lower victims' skepticism. To protect yourself, cybersecurity experts recommend several steps. 

Always verify an influencer’s claims and professional background. Be wary of requests for unconventional payments such as cryptocurrency or gift cards. If the person reacts defensively to questions, or if their results seem too good to be true, it’s likely a red flag. If you suspect you’ve encountered a scamfluencer, stop communication immediately, save all evidence, report it to your financial institution, and file complaints with law enforcement and cybercrime units. 

Social media companies are stepping up their defenses, using AI to detect fake accounts, manipulated media, and suspicious behavior. Despite these efforts, experts emphasize that individual vigilance is still the best defense against scamfluencer tactics. 

In an increasingly digital world, where influence can easily be faked and trust weaponized, staying informed and skeptical is essential. Recognizing the signs of scamfluencers helps prevent fraud and contributes to creating a safer and more authentic online environment.

Tamil Nadu Police, DoT Target SIM Card Fraud in SE Asia with AI Tools


The Cyber Crime Wing of Tamil Nadu Police, in collaboration with the Department of Telecommunications (DoT), is intensifying efforts to combat online fraud by targeting thousands of pre-activated SIM cards used in South-East Asian countries, particularly Laos, Cambodia, and Thailand. These SIM cards have been linked to numerous cybercrimes involving fraudulent calls and scams targeting individuals in Tamil Nadu. 

According to police sources, investigators employed Artificial Intelligence (AI) tools to identify pre-activated SIM cards registered with fake documents in Tamil Nadu but active in international locations. These cards were commonly used by scammers to commit fraud by making calls to unsuspecting victims in the State. The scams ranged from fake online trading opportunities to fraudulent credit or debit card upgrades. A senior official in the Cyber Crime Wing explained that a significant discrepancy was observed between the number of subscribers who officially activated international roaming services and the actual number of SIM cards being used abroad. 
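As a rough illustration of the cross-check described above, the Python sketch below flags SIMs that are registered in Tamil Nadu but active abroad without an official international-roaming activation. The record layout and field names are assumptions for illustration only; the actual DoT and police tooling is not public.

# Hypothetical subscriber records: SIMs registered in Tamil Nadu,
# with whether international roaming was officially activated.
subscribers = {
    "9190000011": {"roaming_activated": True},
    "9190000022": {"roaming_activated": False},
    "9190000033": {"roaming_activated": False},
}

# Hypothetical network observations: country where each SIM was last active.
last_seen_country = {
    "9190000011": "Thailand",   # consistent with activated roaming
    "9190000022": "Cambodia",   # active abroad, roaming never activated
    "9190000033": "India",
}

def flag_suspicious_sims(subscribers, last_seen_country):
    """Flag SIMs active outside India without an official roaming activation."""
    flagged = []
    for msisdn, record in subscribers.items():
        country = last_seen_country.get(msisdn, "India")
        if country != "India" and not record["roaming_activated"]:
            flagged.append((msisdn, country))
    return flagged

print(flag_suspicious_sims(subscribers, last_seen_country))
# -> [('9190000022', 'Cambodia')]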

The department is now working closely with central agencies to detect and block suspicious SIM cards. The use of AI has proven instrumental in identifying mobile numbers involved in a disproportionately high volume of calls into Tamil Nadu. Numbers flagged by AI analysis undergo further investigation, and if credible evidence links them to cybercrimes, the SIM cards are promptly deactivated; a minimal sketch of such a volume screen follows. The crackdown itself comes after a series of high-profile scams that have defrauded individuals of significant amounts of money. 
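The volume screen can be as simple as an outlier test over aggregated call-detail records. The sketch below flags numbers whose inbound call counts dwarf the median; the data and the ten-times-median threshold are illustrative assumptions, not documented details of the department's AI system.

from statistics import median

# Hypothetical daily counts of calls placed into Tamil Nadu, per number.
calls_into_tn = {
    "+8562000001": 12,
    "+8552000002": 9,
    "+6602000003": 640,   # disproportionately high volume
    "+8562000004": 15,
    "+8552000005": 11,
}

def flag_high_volume(calls, multiple=10):
    """Flag numbers whose inbound call count vastly exceeds the median."""
    med = median(calls.values())
    return [number for number, count in calls.items() if count > multiple * med]

print(flag_high_volume(calls_into_tn))
# -> ['+6602000003']  (a candidate for manual investigation, not auto-blocking)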

For example, in Madurai, an advocate lost ₹96.57 lakh in June after responding to a WhatsApp advertisement promoting international share market trading with high returns. In another case, a government doctor was defrauded of ₹76.5 lakh through a similar investment scam. Special investigation teams formed by the Cyber Crime Wing have arrested several individuals linked to these fraudulent activities. Recently, a team investigating ₹38.28 lakh frozen across various bank accounts apprehended six suspects. 

Following their interrogation, two additional suspects, Abdul Rahman from Melur and Sulthan Abdul Kadar from Madurai, were arrested. Authorities are also collaborating with police in North Indian states to apprehend more suspects tied to accounts through which the defrauded money was transacted. Investigations are ongoing in multiple cases, and the police aim to dismantle the network of fraudsters operating both within India and abroad. 

These efforts underscore the importance of using advanced technology like AI to counter increasingly sophisticated cybercrime tactics. By addressing vulnerabilities such as fraudulent SIM cards, Tamil Nadu’s Cyber Crime Wing is taking significant steps to protect citizens and mitigate financial losses.

UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams


Researchers recently demonstrated that cybercriminals could exploit OpenAI's ChatGPT-4o voice API to carry out financial scams, achieving moderate success despite the model's built-in safeguards. The finding has raised concerns about the potential for misuse of this advanced language model.

ChatGPT-4o, OpenAI's latest AI model, combines text, voice, and vision processing. These capabilities ship alongside security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.
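This kind of automation rests on a now-standard tool-calling agent loop: the model decides which action to take next, and scaffolding code executes it and feeds the result back. The sketch below shows that general pattern with the OpenAI Python SDK; the model name, the single stubbed "navigate" tool, and the prompt are illustrative assumptions, not the UIUC researchers' actual harness.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One illustrative tool; a real agent would expose several browser actions.
tools = [{
    "type": "function",
    "function": {
        "name": "navigate",
        "description": "Open a URL in the controlled browser session.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}]

def navigate(url: str) -> str:
    # Stub: a real agent would drive a browser here (e.g., via Playwright).
    return f"opened {url}"

messages = [{"role": "user", "content": "Go to example.com and report back."}]
while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:          # model produced a final answer
        print(msg.content)
        break
    messages.append(msg)            # keep the tool-call turn in the context
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = navigate(**args)   # dispatch to the matching handler
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})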

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51, still low compared to the potential profits such scams might yield.
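Taken together with the success rates above, the per-attempt economics are straightforward to work out. The snippet below simply re-derives average attempts per success from the reported rates; the figures come from the study as described here, and the arithmetic is only an illustration.

# Average attempts needed for one success, from the reported success rates.
success_rates = {"gmail_credential_theft": 0.60, "crypto_transfer": 0.40}

for scam, p in success_rates.items():
    print(f"{scam}: ~{1 / p:.1f} attempts per success")
# gmail_credential_theft: ~1.7 attempts per success
# crypto_transfer: ~2.5 attempts per success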

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.