OpenAI is seeking the help of ethical hackers in a new bug bounty program, offering up to $30,000 (US$20,000) to those who discover security flaws in its flagship AI chatbot, ChatGPT.

After ChatGPT suffered its first major data leak in late March, the chatbot's developer, OpenAI, announced a bug bounty program inviting "security researchers, ethical hackers, and technology enthusiasts" to help identify vulnerabilities in its systems.

The program offers rewards on a sliding scale from $300 (US$200) for "low-severity findings", up to a whopping $30,000 (US$20,000) for "exceptional discoveries".

"As we create technology and services that are secure, reliable, and trustworthy, we need your help," said OpenAI.

"We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems.

"By sharing your findings, you will play a crucial role in making our technology safer for everyone," it added.

Launched in partnership with the crowdsourced security platform Bugcrowd, the program offers "bug bounties" not only for vulnerabilities in ChatGPT itself, but also for API targets, API keys and services belonging to the OpenAI research organisation.

The company specifically listed logins, subscriptions and plugins as areas of interest for the bounty program, and provided a range of examples that fall within the project's scope, including data exposure, authentication and authorisation issues, and payment problems.

"OpenAI is committed to making AI safe and useful for everyone.

"While we work hard to prevent risks, we can't predict every way people will use or misuse our technology in the real world," said OpenAI.

"We appreciate the contributions of ethical hackers who help us uphold high privacy and security standards for our users and technology," it said.

The AI leader also asked for help regarding potential data exposure through third-party vendors.

On its bug bounty program page, OpenAI provided examples of vendors that qualify under the program's third-party category, including Google Workspace, Trello, Jira, Monday.com, Stripe, HubSpot and more.

The company was also strict in defining areas that would not qualify for the program – such as brute-force attacks and attacks stemming from leaked credentials – and further warned participants of potential legal risks for efforts involving third-party products.

"Please understand that, while we can authorise your research on OpenAI’s systems and services, we cannot authorise your efforts on third-party products or guarantee they won’t pursue legal action against you," said OpenAI.

"You are expected, as always, to comply with all applicable laws."

So far, 29 vulnerability reports have been rewarded through the bug bounty program, with an average payout of over $1,000 (US$694.73).

"Play fair, and remember that we can see what you're doing," said OpenAI.

All eyes on ChatGPT

ChatGPT is said to have reached 100 million active users in January alone – and while the chatbot's skyrocketing popularity has solidified OpenAI as a leader in AI development, it has also exposed the company to ample scrutiny from security and industry experts.

In March, ChatGPT suffered its first major data leak when a bug exposed limited user chat history and payment information.

Soon after, Italy issued a temporary ban on ChatGPT, citing concerns over its data privacy practices and the March data breach.

Meanwhile, AI companies have been facing pressure to slow "out-of-control" AI development amid growing fears for the technology's ethical and economic implications.

In an open letter signed by over 1,300 academics, tech and business experts, prominent voices such as Elon Musk and Steve Wozniak backed calls for a six-month freeze on AI development.

Since the March letter was published, OpenAI CEO Sam Altman has confirmed the company is not training GPT-5 – the widely anticipated successor to its GPT-4 language model.

"I think the letter is missing the most technical nuance about where we need to pause … An earlier version of the letter claims we are training GPT-5 right now. We are not, and won't for some time," said Altman.

"So, in that sense, it was sort of silly. We are doing things on top of GPT-4 that I think have all sorts of safety issues that we need to address," he said.

In addition to launching its new bug bounty program, OpenAI announced it is hiring for security roles listed on the company's careers page.