

A quick search of “ChatGPT” on the dark web and Telegram shows 27,912 mentions over the past six months.

Much has been written about the potential for threat actors to use language models. With major open-source large language models (LLMs) such as LLaMA and Orca, and now the WormGPT cybercrime model, trends around the commodification of cybercrime and the growing capabilities of these models are colliding.

Threat actors are already engaging in rigorous discussions about how language models can be used for everything from identifying 0-day exploits to crafting phishing emails.

Open-source models represent a particularly attractive opportunity for threat actors because they have not undergone reinforcement learning from human feedback (RLHF) focused on preventing risky or illegal responses.

This allows threat actors to actively use them to identify 0-days, write spear-phishing emails, and perform other types of cybercrime without the need for jailbreaks.

Threat exposure management company Flare has identified more than 200,000 OpenAI credentials currently for sale on the dark web in stealer logs.

While this is undoubtedly concerning, the statistic only begins to scratch the surface of threat actor interest in ChatGPT, GPT-4, and AI language models more broadly.

Login page for a large-scale phishing platform
Source: Flare

Confronting the Trends: The Cybercrime Ecosystem and Open Source AI Language Models

Over the past five years, there has been a dramatic growth in the commodification of cybercrime. A vast underground network now exists through Tor and illicit Telegram channels in which cybercriminals buy and sell personal information, network access, data leaks, credentials, infected devices, attack infrastructure, ransomware, and more.

Commercially-minded cybercriminals will likely increasingly use rapidly proliferating open-source AI language models. The first application of this type, WormGPT, has already been created and is sold for a monthly access fee.

Massive Custom Spear-Phishing

Phishing-as-a-Service (PhaaS) already exists, providing out-of-the-box infrastructure for launching phishing campaigns for a monthly subscription fee.

There are already extensive discussions among threat actors about using WormGPT to facilitate broader, more personalized phishing attacks.

The use of generative AI will likely allow cybercriminals to launch attacks against thousands of users with personalized messages gleaned from social media accounts, OSINT sources, and online databases, dramatically increasing the phishing threat to employees.

Threat actor explains WormGPT
Source: Flare

“Tomorrow, API-WormGPT will be provided by Galaxy dev channel, request status is unlimited and will be calculated periodically, and to use API-WORMGPT, you need to get API-KEY. The latest news will be announced,” WormGPT threat actor announces on Telegram.

“If you don’t know what is WORMGPT: This WORMGPT is an unlimited version of CHATGPT, designed by hackers and designed for illegal work, like phishing and malware etc. without any ethical source.”

Automated identification of exploits and exposures

Projects such as BabyAGI seek to use language models to loop through thoughts and perform actions online, and potentially in the real world. As things stand, many organizations don’t have full visibility into their attack surface.

They rely on threat actors not quickly identifying unpatched services, exposed credentials and API keys in public GitHub repositories, and other forms of high-risk data exposure.

Semi-autonomous language models could quickly and abruptly alter the threat landscape by automating large-scale exposure detection for threat actors.

Today, threat actors rely on a combination of tools used by cybersecurity professionals and manual effort to identify the exposure that may grant initial access to a system.

We’re likely only years, or even months, away from systems that can not only detect obvious exposures such as credentials in a repository, but also identify new 0-day exploits in applications, dramatically reducing the time security teams have to respond to exploits and data exposures.
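As a rough illustration of how easily the most basic class of exposure can be surfaced, the sketch below scans a local repository checkout for credential-like strings. The file paths, regex patterns, and size limit are illustrative assumptions for this article, not Flare's tooling or a production-grade scanner (tools such as truffleHog or gitleaks do this far more thoroughly); the point is that this kind of check is trivial to automate, for defenders and attackers alike.

```python
# Minimal sketch: scan a local repository checkout for credential-like strings.
# Patterns and limits are illustrative assumptions, not an exhaustive secret scanner.
import re
import sys
from pathlib import Path

PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}

def scan_repo(root: str) -> None:
    """Walk every file under `root` and report matches for each pattern."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue  # skip directories and large binary-ish files
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {name} at offset {match.start()}")

if __name__ == "__main__":
    scan_repo(sys.argv[1] if len(sys.argv) > 1 else ".")
```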

Vishing and Deepfakes

Advances in generative AI also seem poised to create an extremely difficult environment for vishing (voice phishing) attacks. AI-driven services can already realistically clone an individual’s voice from less than 60 seconds of audio, and deepfake technology continues to improve.

At present, deepfakes remain in the uncanny valley, making them somewhat obvious. However, the technology is advancing rapidly, and researchers continue to create and release new open-source projects.

WormGPT responds to a prompt to write sample malware in Python
Source: Flare

Generative AI Models for Hacking and Malware

There are already open-source LLM projects focused on red team activities, such as PentestGPT.

The functionality and specialization of a model largely depend on a multi-step process involving the data on which the model is trained, reinforcement learning from human feedback, and other variables.

“There are promising open source models like orca that promise to be able to find 0days if it has been set to code,” says a threat actor discussing Microsoft’s Orca LLM.

What does this mean for security teams?

Your margin of error as a defender is about to drop dramatically. Reducing SOC noise to focus on high value events and improving Mean Time to Detection (MTTD) and Mean Time to Response (MTTR) for high-risk exposure, whether on the dark web or the clear web, should be a priority.
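For teams that do not already track these metrics, the minimal sketch below shows one way MTTD and MTTR can be computed from incident timestamps. The field names and sample records are illustrative assumptions, not a specific SIEM or SOAR schema.

```python
# Minimal sketch: compute Mean Time to Detection (MTTD) and Mean Time to
# Response (MTTR) from incident records. Field names and sample data are
# illustrative assumptions only.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": "2023-07-01T02:14:00", "detected": "2023-07-01T06:40:00", "resolved": "2023-07-01T11:05:00"},
    {"occurred": "2023-07-03T18:30:00", "detected": "2023-07-03T19:02:00", "resolved": "2023-07-04T01:45:00"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed time between two ISO-8601 timestamps, in hours."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)   # lag from compromise to detection
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)   # time from detection to remediation

print(f"MTTD: {mttd:.1f} hours, MTTR: {mttr:.1f} hours")
```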

Adoption of AI for enterprise security will likely scale much more slowly than for attackers, creating an asymmetry that adversaries will attempt to exploit.

Security teams need to implement an effective attack surface management program and ensure employees receive thorough training on deepfakes and spear-phishing, but beyond that, they should assess how AI can be used to quickly detect and remediate gaps in their security perimeter.

Security is only as strong as the weakest link, and AI is about to make that weak link much easier to find.

About Eric Clay

Eric is a security researcher at Flare, a threat exposure monitoring platform. He has experience in security data analysis, security research, and applications of AI in cybersecurity.

Sponsored and written by Flare.
