OpenAI, the artificial intelligence company behind the powerful GPT-4 language model, has disclosed that it detected and terminated several accounts associated with state-sponsored hackers who used its platform for malicious purposes. The hackers included Russian military intelligence and Chinese-backed groups, as well as actors linked to Iran and North Korea, according to OpenAI.
How did the hackers use OpenAI?
OpenAI said that the hackers mainly used its chatbot feature to access open-source information, such as satellite communication protocols, and to translate content into the local languages of their targets. They also used GPT-4 to find coding errors and complete basic coding tasks. Some of the hackers also used the language model to craft phishing emails, which are designed to trick recipients into clicking on malicious links or attachments.
How did OpenAI and Microsoft discover and stop the attacks?
OpenAI said that it conducted the investigation in collaboration with Microsoft, its major financial backer and cloud provider. The two companies used various techniques to identify and terminate the hackers’ accounts, such as analyzing the usage patterns, content, and metadata of the queries. They also shared information and best practices with other AI providers and researchers to prevent similar attacks in the future.
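Neither company has published the specifics of that detection work, but screening accounts by query metadata generally reduces to scoring usage logs for anomalies. The Python sketch below is purely illustrative: the field names, keyword list, and thresholds are assumptions, not details disclosed by OpenAI or Microsoft.

```python
from collections import defaultdict

# Hypothetical illustration only: OpenAI has not disclosed its actual
# detection logic. The keywords, field names, and thresholds are invented.
SUSPICIOUS_TERMS = {"exploit", "phishing", "evade detection", "hide process"}

def score_account(queries):
    """Assign a crude risk score to one account's query log.

    `queries` is a list of dicts with hypothetical keys:
    'text' (prompt content) and 'source_ip' (request metadata).
    """
    score = 0.0
    ips = set()
    for q in queries:
        text = q["text"].lower()
        score += sum(term in text for term in SUSPICIOUS_TERMS)
        ips.add(q["source_ip"])
    score += 0.5 * max(0, len(ips) - 3)  # many source IPs: possible shared tooling
    score += 0.01 * len(queries)         # unusually high request volume
    return score

def flag_accounts(logs, threshold=5.0):
    """Return account IDs whose aggregate score exceeds the threshold."""
    return [acct for acct, queries in logs.items()
            if score_account(queries) >= threshold]

if __name__ == "__main__":
    logs = defaultdict(list)
    logs["acct-1"] = [{"text": "how to hide process on linux",
                       "source_ip": f"10.0.0.{i}"} for i in range(8)]
    logs["acct-2"] = [{"text": "translate this paragraph",
                       "source_ip": "10.0.1.1"}]
    print(flag_accounts(logs))  # -> ['acct-1']
```

In practice, a real pipeline would combine signals like these with account-creation data and human review; the point of the sketch is only that metadata-based screening can flag abusive accounts without inspecting every response.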
What are the implications of the attacks?
The attacks highlight the potential risks and challenges of large language models, which can generate realistic and coherent text on a wide range of topics and tasks. While these models have many positive applications, such as education, entertainment, and health care, they can also be abused by malicious actors who seek to exploit their capabilities for cyberattacks, disinformation, and propaganda.
However, OpenAI and Microsoft said that they have not yet observed any novel or unique AI-enabled attack or abuse techniques resulting from the hackers’ usage of GPT-4. They also cited a recent British government study that concluded that large language models may boost the skills of novice hackers but are of little benefit to advanced threat actors who already have sophisticated tools and methods.
What are the next steps for OpenAI and Microsoft?
OpenAI and Microsoft said that they will continue to monitor and mitigate the threats posed by the misuse of large language models and to work with the AI community and policymakers to establish ethical and responsible standards for AI development and deployment. They also said they will be transparent and accountable to the public about their findings and actions.
Meanwhile, the relationship between OpenAI and Microsoft is under scrutiny by multiple national antitrust authorities, who are concerned about the potential market dominance and influence of the two companies in the AI sector.
Other state-sponsored hackers who used ChatGPT
Besides the Russian and Chinese hackers, OpenAI also identified two other groups of state-sponsored hackers who used its chatbot feature for malicious purposes. These groups are affiliated with Iran and North Korea and have different objectives and targets.
Charcoal Typhoon: A Chinese hacker group that targets Asian countries and critics of China
One of the Chinese hacker groups that used ChatGPT is known as Charcoal Typhoon. This group used the language model to research companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns. The group targets sectors including government, higher education, communications infrastructure, oil and gas, and information technology. It primarily focuses on Asian countries and those that oppose China’s policies. It is also called Aquatic Panda, ControlX, RedHotel, and Bronze University.
Salmon Typhoon: A Chinese hacker group that resurfaced after a decade of inactivity
Another Chinese hacker group that used ChatGPT is known as Salmon Typhoon. This group used the language model to translate and summarize technical papers, retrieve publicly available information on intelligence agencies and regional threat actors, assist with coding, and research common ways to hide processes on a system. Also called Sodium, APT4, and Maverick Panda, the threat actor has previously targeted U.S. defense contractors, government agencies, and entities within the cryptographic technology sector. It recently resurfaced after having been dormant for over a decade.
“This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies,” Microsoft said.
Crimson Sandstorm: An Iranian hacker group that targets various sectors and regions
An Iranian hacker group that used ChatGPT is known as Crimson Sandstorm. This group used the language model for scripting support related to app and web development, to generate content likely for spear-phishing campaigns, and to research common ways for malware to evade detection. Potentially connected to the Islamic Revolutionary Guard Corps and active since at least 2017, Crimson Sandstorm targets victims in the defense, maritime shipping, transportation, healthcare, and technology sectors. It is also called Tortoiseshell, Imperial Kitten, and Yellow Liderc.
Emerald Sleet: A North Korean hacker group that targets defense experts and organizations
A North Korean hacker group that used ChatGPT is known as Emerald Sleet. This group used the language model to find experts and organizations focused on defense issues in the Asia-Pacific region, seek publicly available information about vulnerabilities, obtain help with basic scripting tasks, and draft content that could be used in phishing campaigns. “Highly active” in 2023, the hacker group impersonated academic institutions and nongovernmental organizations to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. The group is also known as Kimsuky and Velvet Chollima.
Forest Blizzard: A Russian hacker group that targets defense and government sectors
A Russian hacker group that used GPT-4 is known as Forest Blizzard. This group used the language model for open-source research into satellite communication protocols and radar imaging technology, and for support with scripting tasks (such as file manipulation, data selection, regular expressions, and multiprocessing) to potentially automate or optimize technical operations. The military intelligence actor, also called APT28 and Fancy Bear, focuses on victims in defense, transportation, government, energy, nongovernmental organizations, and information technology. The threat group has been “extremely active” in targeting organizations involved in and related to the Russia-Ukraine war.
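Microsoft’s report names these task categories without publishing the scripts themselves. As a benign illustration of what file manipulation, regex-based data selection, and multiprocessing look like combined in one script, here is a hypothetical Python sketch that filters matching lines out of a directory of log files in parallel; the pattern, paths, and file names are all invented and none of this code comes from the threat actor.

```python
import re
from multiprocessing import Pool
from pathlib import Path

# Benign, hypothetical sketch of the task categories Microsoft named:
# file manipulation, data selection with regular expressions, and
# multiprocessing. All names and patterns here are invented.
PATTERN = re.compile(r"\bERROR\b.*timeout", re.IGNORECASE)

def select_lines(path):
    """Read one log file and return the lines matching PATTERN."""
    lines = Path(path).read_text(errors="ignore").splitlines()
    return [line for line in lines if PATTERN.search(line)]

def main():
    files = sorted(Path("logs").glob("*.log"))  # hypothetical input directory
    with Pool() as pool:                        # one worker per CPU core
        results = pool.map(select_lines, files)
    # File manipulation step: write all selected lines to a single output file.
    Path("filtered.log").write_text(
        "\n".join(line for result in results for line in result))

if __name__ == "__main__":
    main()
```

Scripts of this shape are commonplace in legitimate system administration, which is precisely why, as the British government study cited above suggests, such LLM assistance mainly saves routine effort rather than conferring novel offensive capability.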
How OpenAI and Microsoft are responding to the threats
OpenAI and Microsoft said that they are taking various measures to prevent and mitigate the misuse of large language models by state-sponsored hackers and other malicious actors. They said that they have set up a team to detect and neutralize threats and that they work with the broader AI industry to exchange information. They also said that they are developing and promoting ethical and responsible standards for AI development and deployment.
“Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries,” Microsoft said.
However, OpenAI also acknowledged that it “will not be able to stop every instance” of illicit activity and that it relies on the cooperation and vigilance of the AI users and the public to report any suspicious or harmful behavior.