Southeast Asia has emerged as a global hotspot for cybercrime, where human trafficking and high-tech fraud collide. Criminal syndicates in countries like Cambodia and Myanmar run large-scale "pig butchering" operations: scam centres staffed by trafficked individuals who are forced to defraud victims in affluent markets like Singapore and Hong Kong.
The scale is staggering: one UN estimate puts global losses from these scams at $37 billion. And things may soon get worse. The spike in cybercrime in the region is already reshaping politics and policy. Thailand has reported a drop in Chinese visitors this year, after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam camp; Bangkok is now scrambling to convince tourists that it is safe to visit. Singapore recently enacted an anti-fraud law that authorises law enforcement to freeze the bank accounts of scam victims.
But why has Asia become associated with cybercrime? Ben Goodman, Okta's general manager for Asia-Pacific, observes that the region has several distinct characteristics that make cybercrime schemes easier to carry out. For example, the region is a "mobile-first market": popular mobile messaging apps like WhatsApp, Line, and WeChat facilitate direct contact between fraudster and victim.
AI is also helping scammers navigate Asia's linguistic diversity. Goodman notes that machine translation, while a "phenomenal use case for AI," also makes it "easier for people to be baited into clicking the wrong links or approving something."
Nation-states are getting involved too. Goodman points to suspicions that North Korea is planting fake employees inside major tech companies to gather intelligence and funnel much-needed funds into the isolated country.
A new threat: Shadow AI
Goodman is also concerned about a newer AI risk in the workplace: "shadow" AI, where employees use personal accounts to access AI models without their company's oversight. That could be someone preparing a presentation for a business review, logging into ChatGPT on a personal account, and generating an image.
This can result in employees unintentionally submitting confidential information to a public AI platform, creating "potentially a lot of risk in terms of information leakage." Agentic AI could likewise blur the line between personal and professional identities: for example, an agent tied to your personal email rather than your work account.
This is where things get tricky, in Goodman's view. Because AI agents can make decisions on a user's behalf, it is critical to distinguish whether a user is acting in a personal or a professional capacity.
"If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much greater," Goodman warned.
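To make that distinction concrete, here is a minimal sketch in TypeScript of how an agent framework might scope each action to the identity it runs under. This is not Okta's implementation, and every type and name below is hypothetical; it only illustrates the principle Goodman describes, that a personal credential and a corporate credential belonging to the same human should carry different permissions.

```typescript
// Hypothetical sketch only: these types and checks are illustrative,
// not taken from any real identity product or API.

type IdentityContext = "personal" | "corporate";

interface AgentCredential {
  subject: string;          // e.g. "alice@gmail.com" vs. "alice@acme.com"
  context: IdentityContext; // which identity issued this credential
  scopes: string[];         // actions this identity may delegate to an agent
}

// An agent action succeeds only if the credential was issued by the right
// identity context AND that identity explicitly granted the scope.
function canPerform(
  cred: AgentCredential,
  action: string,
  requires: IdentityContext,
): boolean {
  if (cred.context !== requires) return false; // personal credentials never reach corporate resources
  return cred.scopes.includes(action);
}

// The same human, two identities, two very different blast radii:
const personal: AgentCredential = {
  subject: "alice@gmail.com",
  context: "personal",
  scopes: ["read:calendar"],
};
const work: AgentCredential = {
  subject: "alice@acme.com",
  context: "corporate",
  scopes: ["read:crm", "send:invoice"],
};

console.log(canPerform(personal, "send:invoice", "corporate")); // false: blocked
console.log(canPerform(work, "send:invoice", "corporate"));     // true: allowed
```

Under a model like this, an agent compromised through someone's personal account cannot reach corporate resources, which is one way to shrink the "blast radius" Goodman warns about.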