OpenAI

Artificial intelligence research company OpenAI today announced the launch of a new bug bounty program that allows registered security researchers to find vulnerabilities in its product line and get paid for reporting them through the crowdsourced security platform Bugcrowd.

As the company revealed today, rewards are based on the severity and impact of reported issues, and they range from $200 for low-severity security vulnerabilities to $20,000 for exceptional discoveries.

“The OpenAI Bug Bounty Program is a way for us to recognize and reward the valuable knowledge of security researchers who help keep our technology and business secure,” OpenAI said.

“We invite you to report any vulnerabilities, bugs or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone.”

However, while the OpenAI application programming interface (API) and its artificial intelligence chatbot ChatGPT are prime targets for bounty hunters, the company has asked researchers to report model issues via a separate form unless they have a security impact.

“Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach,” OpenAI said.

“To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Routing them to the right place allows our researchers to use those reports to improve the model.”

Other out-of-scope issues include jailbreaks and safety bypasses that users have exploited to trick the ChatGPT chatbot into ignoring the safeguards implemented by OpenAI engineers.

Announcing the OpenAI bug bounty

Last month, OpenAI disclosed a ChatGPT payment data leak that the company blamed on a bug in the open-source Redis client library used by its platform.

Due to the bug, ChatGPT Plus subscribers started seeing other users’ email addresses on their subscription pages. Following a growing stream of user reports, OpenAI took the ChatGPT bot offline to investigate the issue.

In a post-mortem published a few days later, the company explained that the bug caused the ChatGPT service to expose the chat requests and personal information of approximately 1.2% of Plus subscribers.

Information exposed included subscriber names, email addresses, payment addresses, and partial credit card information.

“The bug was discovered in the open-source Redis client library, redis-py. As soon as we identified the bug, we contacted the Redis maintainers with a patch to address the issue,” OpenAI said.
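The failure mode behind a leak of this kind — a request abandoned after being sent over a shared, pipelined connection, leaving a stale reply that is then delivered to the next caller — can be sketched with a toy simulation. The `FakeConnection` class below is purely illustrative and is not redis-py’s actual API:

```python
from collections import deque

class FakeConnection:
    """Toy model of a pipelined client connection shared by many callers.

    Replies come back strictly in request order, so the client must read
    exactly one reply per request it sends.
    """
    def __init__(self):
        self._replies = deque()

    def send(self, key):
        # The "server" queues one reply for each request it receives.
        self._replies.append(f"data-for:{key}")

    def read_reply(self):
        return self._replies.popleft()

def fetch(conn, key):
    conn.send(key)
    return conn.read_reply()

conn = FakeConnection()

# Normal operation: each caller reads the reply to its own request.
assert fetch(conn, "alice") == "data-for:alice"

# Now a request is cancelled after being sent but before its reply is
# read, leaving an unconsumed reply buffered on the shared connection...
conn.send("alice")  # cancelled mid-flight; reply never consumed

# ...so the next user of the pooled connection receives Alice's data.
leaked = fetch(conn, "bob")
print(leaked)  # prints "data-for:alice"
```

The sketch shows why the fix is to discard (rather than reuse) a connection whose request was interrupted: once request and reply streams fall out of step, every subsequent reply on that connection belongs to the wrong caller.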

While the company didn’t tie today’s announcement to this recent incident, the issue might have been discovered sooner, and the data leak avoided, if OpenAI had already had a bug bounty program running that allowed researchers to test its products for security vulnerabilities.
