
Person using ChatGPT

ChatGPT caused a stir with its user-friendly interface and believable AI-generated responses. With just one prompt, ChatGPT provided detailed answers that other AI assistants couldn’t match. Trained on a massive data set, it quickly amazed the tech industry and the public with the scope and variety of topics it could cover.

However, the sophistication of the technology raises an inevitable question: what are the disadvantages of ChatGPT and similar technologies? Because it can generate a multitude of realistic responses, ChatGPT could be used to create content capable of tricking an unassuming reader into believing that a real human is behind it.

Understanding ChatGPT

When you think of AI assistants, Google Assistant and Alexa might come to mind first. You ask a question like “How’s the weather today?” and get a short response. While some chat features are available, most assistants do not produce long, AI-generated text that can be expanded through further interaction.

ChatGPT takes a complex prompt and generates a full response, potentially spanning multiple paragraphs. It remembers the previous conversation and builds on it when asked further questions, providing more detailed answers.

An example is shown below: ChatGPT lists instructions for making a chocolate cake.

ChatGPT providing instructions on how to bake a cake
Source: Specops

This conversation can range from a simple list of ingredients to a full-fledged short story. However, ChatGPT may not respond to some topics due to its data set.
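To see how that multi-turn behavior works in practice, here is a minimal Python sketch using OpenAI’s chat completions API. The model name, prompts, and follow-up question are illustrative assumptions, not taken from the example above.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The whole conversation history is replayed with every request, which is
# how the assistant appears to "remember" earlier turns and build on them.
messages = [{"role": "user", "content": "Give me a recipe for a chocolate cake."}]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# A follow-up question is answered in the context of the previous turns.
messages.append({"role": "user", "content": "Can you turn that into a short story?"})
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)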

Also, it’s important to note that just because an answer is generated doesn’t mean it’s accurate. ChatGPT doesn’t “think” like us; it uses data from the internet to generate a response. Therefore, incorrect data and associated biases can lead to an unexpected response.

The risks of AI assistants

As mentioned above, not all answers are equal. But that’s not the only risk. ChatGPT responses can be compelling, so there is real potential to generate content that tricks an individual into thinking it was written by another person.

This opens the door to weaponizing the technology.

Many phishing emails are easily recognizable, especially when written by non-native speakers. However, ChatGPT could make the task much easier and the results far more convincing, as shown in this article by Check Point.

The speed of generation and quality of response opens the door to much more credible phishing emails and even simple generation of exploit code.

Additionally, because the underlying language-model technology is widely accessible, an enterprising individual could build a dataset of existing company emails to create a tool that produces convincing phishing emails quickly and easily.

ChatGPT writing password reset request from Microsoft
Source: Specops

Phishing emails can provide a way for hackers to gain access to your system and deploy malware or ransomware, which can cause serious damage. As ChatGPT improves, it will become an increasingly powerful tool for malicious actors.

What about a non-native speaker trying to convince an employee, through conversation, to give up their credentials?

Although the existing ChatGPT interface has safeguards against requests for sensitive information, you can see how the AI model could help shape the flow and dialogue that a supposed Microsoft helpdesk technician might use, lending authenticity to the request.

Impersonate Microsoft Support
Source: Specops

ChatGPT Abuse Protection

Have you received a suspicious email and wondered whether it was written entirely or partially by ChatGPT? Fortunately, there are tools that can help. These tools return percentage-based scores, so they cannot provide absolute certainty. However, they can raise reasonable doubt and help you ask the right questions to determine whether the email is genuine.

For example, after pasting in the contents of the previously created password reset email, the GPTZero tool highlights sections that may have been generated by an AI. It’s not foolproof detection, but it can make you pause and question the authenticity of any given content.

Detection of AI-generated content
Source: Specops

You can use this tool to help identify AI-generated text: https://gptzero.me/.
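For teams that want to automate this kind of check, a rough Python sketch of calling a detection service is shown below. The endpoint, headers, and response fields are assumptions for illustration and may not match GPTZero’s current API, so consult its documentation before relying on them.

import requests

suspect_email = """We have detected unusual activity on your account.
Please reset your password within 24 hours using the link below."""

# Assumed endpoint and field names; verify against the GPTZero documentation.
response = requests.post(
    "https://api.gptzero.me/v2/predict/text",
    json={"document": suspect_email},
    headers={"x-api-key": "YOUR_API_KEY"},  # placeholder credential
    timeout=30,
)
result = response.json()

# Hypothetical field: the estimated probability the document is AI-generated.
probability = result["documents"][0]["completely_generated_prob"]
print(f"Estimated probability of AI-generated text: {probability:.0%}")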

Social engineering booming with ChatGPT

From bogus support requests to caller ID spoofing, and now even scripting with ChatGPT, the internet is full of resources that help make social engineering campaigns successful. Threat actors advance social engineering attacks by combining multiple attack vectors, using ChatGPT alongside other social engineering methods.

ChatGPT can help attackers better create a fake identity, making their attacks more likely to succeed.

Stay safe with Specops Secure Service Desk

Specops Secure Service Desk helps prevent cyberattacks on your help desk, including social engineering attempts generated by ChatGPT. With Specops Secure Service Desk, you can ensure that a user is who they claim to be with a secure verification approach that goes beyond security questions, which can be easily found by cybercriminals during a targeted social engineering attack.

For example, when a user calls the service desk for a password reset, the tool requires service desk agents to verify the user’s identity before resetting the password.

The user can be verified with a one-time code sent to the mobile number associated with their Active Directory account, or with existing authentication services such as Duo Security, Okta, PingID and Symantec VIP. Organizations can also layer these options to enforce multi-factor authentication at the service desk.
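Conceptually, the one-time code step works along the lines of the simplified Python sketch below. This is a generic illustration only, not Specops’ implementation, and the send_sms helper is a hypothetical stand-in for whatever SMS gateway an organization uses.

import hmac
import secrets
import time

def send_sms(phone_number: str, message: str) -> None:
    # Placeholder: a real deployment would call an SMS gateway here.
    print(f"[SMS to {phone_number}] {message}")

def issue_code(phone_number: str, ttl_seconds: int = 300) -> dict:
    """Generate a short-lived six-digit code and send it to the caller's phone."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    send_sms(phone_number, f"Your service desk verification code is {code}")
    return {"code": code, "expires_at": time.time() + ttl_seconds}

def verify_code(challenge: dict, supplied_code: str) -> bool:
    """The agent only proceeds with the password reset if the code checks out."""
    if time.time() > challenge["expires_at"]:
        return False
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(challenge["code"], supplied_code)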

Specops Service Desk

With vishing scams showing no signs of slowing down and ChatGPT poised to evolve alongside AI technology, investing in the Specops Secure Service Desk solution could be a vital step for organizations looking to protect themselves.

The future of ChatGPT and securing your users

ChatGPT is a game-changer, providing a powerful and easy-to-use tool for AI-powered conversations. Although there are many potential applications, organizations should be aware of how attackers can use this tool to improve their tactics and the additional risks it can pose to their organization.

Sponsored and written by Specops Software
