More than 225,000 records of compromised OpenAI ChatGPT credentials surfaced for sale on the dark web between January and October 2023, according to figures from Group-IB.
The cybersecurity firm explained in its “Hi-Tech Crime Trends 2023/2024” report, released last week, that “the number of infected devices decreased slightly in mid- and late summer but grew significantly between August and September.”
The compromised credentials were detected in logs linked to three information stealers: LummaC2, Raccoon, and RedLine. Group-IB’s findings show that LummaC2 compromised 70,484 hosts, Raccoon affected 22,468 hosts, and RedLine infected 15,970 hosts.
From June to October 2023, over 130,000 unique hosts with access to OpenAI ChatGPT were compromised, marking a 36% surge from the figures recorded in the first five months of the year.
“The sharp increase in the number of ChatGPT credentials for sale is due to the overall rise in the number of hosts infected with information stealers, data from which is then put up for sale on markets or in UCLs [underground clouds of logs],” Group-IB said.
Group-IB says threat actors are shifting their focus from corporate computers to public AI systems.
“This gives them access to logs with the communication history between employees and systems, which they can use to search for confidential information (for espionage purposes), details about internal infrastructure, authentication data (for conducting even more damaging attacks), and information about application source code.”
The news comes on the heels of a report from Microsoft, which similarly noted that threat actors “are looking at AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could advance their objectives and attack techniques.”
It also acknowledged that “cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.”
The company, however, highlighted that its “research with OpenAI has not identified significant attacks employing the LLMs we monitor closely.”