It was only a matter of time before hackers started exploiting the growing popularity of ChatGPT to spread malware and steal sensitive personal information. In recent weeks, multiple security firms have reported exactly such attacks.
For those unaware, ChatGPT is an AI chatbot developed by OpenAI that has become extremely popular in recent months. Its unique output and Microsoft’s investment have made it one of the most sought-after technologies online, with over 100 million users in just two months (November 2022 to January 2023), according to BleepingComputer.
This high demand has inevitably led to the monetization of the service: OpenAI now offers a paid ChatGPT Plus subscription at $20 per month for uninterrupted access. Cybersecurity professionals, however, have identified various hacking campaigns that lure victims with promises of free, unrestricted access. These offers are clearly too good to be true and should be approached with caution.
For example, threat actors have been promoting Redline, a notorious infostealer capable of stealing passwords and credit card data, taking screenshots, and exfiltrating files. These hackers created a bogus website to promote unlimited access to ChatGPT and even set up a Facebook page to advertise it. Other hackers have tried to distribute the Aurora stealer.
Additionally, fake ChatGPT apps are being distributed on Google Play and third-party Android app stores. Users who install them do not gain access to the chatbot; instead, they unknowingly download various forms of malware. Researchers from Cyble have discovered more than 50 such apps.
It should be emphasized that the official website – https://chat.openai.com/ – and OpenAI’s APIs are the only legitimate ways to access ChatGPT. All other “alternatives” are not credible and could compromise smartphone security and user privacy.
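The advice above boils down to an exact-hostname check: a link is only trustworthy if it points to the real `chat.openai.com` over HTTPS, not a look-alike domain. A minimal sketch of that check (the helper name and the set of allowed hosts are illustrative assumptions, not an OpenAI-provided API):

```python
from urllib.parse import urlparse

# Illustrative allowlist: the official ChatGPT hostname only.
OFFICIAL_HOSTS = {"chat.openai.com"}

def is_official_chatgpt_url(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its hostname exactly
    matches an allowlisted host. Subdomain tricks such as
    'chat.openai.com.evil.example' fail because the full hostname
    must match, not merely start with the official name."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in OFFICIAL_HOSTS

print(is_official_chatgpt_url("https://chat.openai.com/"))              # True
print(is_official_chatgpt_url("https://chat.openai.com.evil.example"))  # False
print(is_official_chatgpt_url("http://chat.openai.com/"))               # False
```

The key design point is comparing the parsed hostname for exact equality rather than checking whether the URL merely *contains* the official name, which is precisely the pattern typosquatted phishing domains rely on.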