Cybersecurity firm Sophos has warned users about fake ChatGPT apps. It claims that downloading these apps can be risky: they offer almost no functionality and continually serve advertisements. According to the report, these apps lure unaware users into a subscription that can cost hundreds of dollars annually.
Sophos refers to these fake ChatGPT apps as fleeceware, describing them as apps that bombard users with adverts until they give in and purchase a subscription. The apps are purposely designed to see little use once the free trial ends, so users delete them without realizing they are still obligated to make weekly or monthly subscription payments.
The report investigated five bogus ChatGPT apps with names like "Chat GBT," chosen to deceive users and boost the apps' rankings in the Google Play and App Store listings. It also noted that while these fake apps charged users anywhere from $10 per month to $70 per year, OpenAI's ChatGPT offers its key functionality for free online. Another scam app, named Genie, lured users into subscribing for $7 weekly or $70 annually, generating $1 million in revenue over the previous month.
“Scammers have and always will use the latest trends or technology to line their pockets. ChatGPT is no exception," said Sean Gallagher, principal threat researcher at Sophos. "With interest in AI and chatbots arguably at an all-time high, users are turning to the Apple App and Google Play Stores to download anything that resembles ChatGPT. These types of scam apps—what Sophos has dubbed ‘fleeceware’—often bombard users with ads until they sign up for a subscription. They’re banking on the fact that users won’t pay attention to the cost or simply forget that they have this subscription. They’re specifically designed so that they may not get much use after the free trial ends, so users delete the app without realizing they’re still on the hook for a monthly or weekly payment."
While some of these bogus ChatGPT fleeceware apps have already been tracked down and removed from the app stores, they are expected to resurface. Users are therefore advised to stay wary of these fake apps and to verify that the apps they download are legitimate.
Users who have already downloaded these apps are advised to follow the App Store or Google Play procedures for unsubscribing, since merely deleting the bogus app does not cancel the subscription.
Business professionals began using ChatGPT and other generative AI tools in enterprise settings as soon as they became available, to complete their tasks more quickly and effectively. For marketing directors, generative AI drafts PR pitches; for sales representatives, it writes prospecting emails. Business users have already incorporated it into their daily operations, even though data governance and legal concerns have emerged as barriers to official company adoption.
With tools like GitHub Copilot, developers have been leveraging generative AI to write and enhance code. A developer uses natural language to describe a software component, and AI then generates working code that makes sense in the developer's context.
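The workflow described above can be sketched as follows. This is a hypothetical illustration, not actual GitHub Copilot output: the developer supplies a natural-language description (here, a signature and docstring), and the assistant proposes a context-appropriate body, which the developer must then review and integrate.

```python
# The developer writes a natural-language description of the component;
# an AI assistant proposes a completion like the body below. The developer
# still has to verify the behavior (edge cases, conventions) before merging.

def mask_email(address: str) -> str:
    """Mask an email address for display, e.g. 'alice@example.com' -> 'a***e@example.com'."""
    # --- plausible AI-suggested implementation begins here ---
    local, _, domain = address.partition("@")
    if len(local) <= 2:
        masked = local[:1] + "*"
    else:
        masked = local[0] + "***" + local[-1]
    return f"{masked}@{domain}"

print(mask_email("alice@example.com"))  # a***e@example.com
```

The point of the sketch is the division of labor: the AI produces working code quickly, but judging whether the masking rule is correct for the product's requirements remains the developer's job.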
The developer's participation in this process is essential: they must have the technical knowledge to ask the right questions, assess the code that is produced, and integrate it with the rest of the code base. These duties call for software engineering expertise.
Traditionally, security teams have focused on the applications created by their development organizations. However, many still assume these business platforms are ready-made solutions, when in reality they have become application development platforms that power many business-critical applications. Bringing citizen developers under the security umbrella is still a work in progress.
With the growing popularity of generative AI, even more users will be creating applications. Business users are already making decisions about where data is stored, how their apps handle it, and who can access it. Errors are inevitable if we leave these new developers to make such decisions on their own without guidance.
Some organizations aim to ban citizen development or demand that business users obtain permission before using any application or gaining access to any data. That is an understandable response, but given the enormous productivity gains for the company, it is unlikely to succeed. A preferable strategy is to establish automatic guardrails that quietly address security issues and give business users a safe way to employ generative AI through low-code/no-code, allowing them to focus on what they do best: pushing the business forward.
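One way such an automatic guardrail could work is sketched below. This is a minimal hypothetical example, not any vendor's actual product: before a citizen-developed app pushes data to an external connector, a platform-side check redacts fields tagged as sensitive, so the business user never has to make the data-handling decision themselves. All names (`SENSITIVE_TAGS`, `guardrail_check`, the tag vocabulary) are assumptions for illustration.

```python
# Hypothetical guardrail: silently redact sensitive fields before a
# citizen-developed app sends a record to an external service.

SENSITIVE_TAGS = {"pii", "financial", "health"}  # assumed tag vocabulary

def guardrail_check(payload: dict, field_tags: dict) -> dict:
    """Return a copy of payload with sensitive fields redacted.

    field_tags maps a field name to the list of classification tags
    attached to it by the platform's data catalog (assumed to exist).
    """
    safe = {}
    for field, value in payload.items():
        if SENSITIVE_TAGS & set(field_tags.get(field, [])):
            safe[field] = "[REDACTED]"  # block the value, keep the field
        else:
            safe[field] = value
    return safe

record = {"name": "Alice", "ssn": "123-45-6789"}
tags = {"ssn": ["pii"]}
print(guardrail_check(record, tags))  # {'name': 'Alice', 'ssn': '[REDACTED]'}
```

The design choice worth noting is that the guardrail intervenes transparently rather than blocking the app outright, which preserves the productivity gains the article argues companies will not want to give up.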