
Malicious alternative to ChatGPT advertised on the dark web

Threat actors are showing increased interest in generative AI tools, with hundreds of thousands of OpenAI credentials for sale on the dark web and a malicious ChatGPT alternative being advertised to cybercriminals.

Both novice and seasoned cybercriminals can use these tools to create more convincing phishing emails, personalized to the intended recipient, to increase the chances of an attack succeeding.

Hackers exploit GPT AI

Over the past six months, dark web and Telegram users mentioned ChatGPT, OpenAI's artificial intelligence chatbot, more than 27,000 times, according to data from Flare, a threat exposure management company, shared with BleepingComputer.

Analyzing dark web forums and marketplaces, Flare researchers noticed that OpenAI credentials are among the latest products available.

Researchers have identified over 200,000 OpenAI credentials for sale on the dark web in the form of stealer logs.

Compared to ChatGPT's estimated 100 million active users in January, the number may seem insignificant, but it shows that threat actors see potential for malicious activity in generative AI tools.

A report in June from cybersecurity firm Group-IB said that illicit dark web marketplaces were trading information-stealing malware logs containing over 100,000 ChatGPT accounts.

Cybercriminals’ interest in these tools grew to the point that one of them developed a ChatGPT clone named WormGPT and trained it on malware-focused data.

The tool is advertised as the “best GPT alternative for blackhat” and a ChatGPT alternative “that lets you do all sorts of illegal things”.

WormGPT promoted on a cybercriminal forum (source: SlashNext)

WormGPT is based on GPT-J, a large open-source language model released in 2021 that produces human-like text. Its developer claims to have trained the tool on a diverse set of data, with a focus on malware-related material, but did not disclose the specific datasets used.
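To illustrate how low the barrier to building on this foundation is, the sketch below loads the same publicly available GPT-J base model through Hugging Face's transformers library. This only shows the open-source model WormGPT reportedly builds on; it is not WormGPT's actual code or training setup.

```python
# Minimal sketch: loading the public GPT-J 6B base model via Hugging Face
# transformers. The 6B checkpoint is large (~24 GB in fp32), so this assumes
# a machine with substantial RAM; it is an illustration, not WormGPT itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Generate a short human-like continuation from a benign prompt.
prompt = "Dear customer,"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```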

WormGPT shows potential for BEC attacks

Email security provider SlashNext was able to access WormGPT and ran some tests to determine the potential danger it poses.

Researchers focused on creating messages suitable for Business Email Compromise (BEC) attacks.

“In one experiment, we asked WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice,” the researchers explained.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showing its potential for sophisticated phishing and BEC attacks,” they concluded.

WormGPT generates a persuasive message for a BEC attack (source: SlashNext)

Analyzing the output, SlashNext researchers noted the advantages generative AI can bring to a BEC attack: beyond the “impeccable grammar” that lends the message legitimacy, it also enables less skilled attackers to carry out attacks above their level of sophistication.

While defending against this emerging threat can be difficult, businesses can prepare by training their employees to verify messages that demand urgent action, especially when a financial component is present.

Improved email verification processes should also help, with alerts for messages coming from outside the organization and flags for keywords commonly associated with BEC attacks.
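As a rough illustration of that kind of verification rule, the sketch below flags messages from external senders and hits on common BEC keywords. The field names, keyword list, and domain are hypothetical placeholders, not a production filter.

```python
# Minimal sketch of a BEC-style flagging rule, assuming emails arrive as
# Python dicts with hypothetical "sender", "subject", and "body" fields.
BEC_KEYWORDS = {"wire transfer", "urgent payment", "invoice", "gift cards", "confidential"}
INTERNAL_DOMAIN = "example.com"  # assumption: the organization's own domain

def flag_bec_risk(email: dict) -> list[str]:
    """Return a list of reasons this message deserves extra scrutiny."""
    reasons = []
    # Alert on messages originating outside the organization.
    if not email["sender"].lower().endswith("@" + INTERNAL_DOMAIN):
        reasons.append("external sender")
    # Flag keywords commonly associated with BEC attacks.
    text = (email["subject"] + " " + email["body"]).lower()
    for keyword in BEC_KEYWORDS:
        if keyword in text:
            reasons.append(f"keyword: {keyword!r}")
    return reasons

if __name__ == "__main__":
    suspicious = {
        "sender": "ceo@exarnple.com",  # look-alike domain
        "subject": "Urgent payment needed today",
        "body": "Please process this invoice by wire transfer and keep it confidential.",
    }
    print(flag_bec_risk(suspicious))
```

In practice such checks would run inside an email gateway and feed a review queue rather than block mail outright, since keyword lists alone produce false positives.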
