For Finding Security Weaknesses, OpenAI Will Pay Researchers Up to $20,000

OpenAI, the startup behind the popular ChatGPT AI writer, has announced the launch of a new bug bounty program with some pretty significant rewards for the most exceptional discoveries.

Cash-based rewards are set to range from $200 for low-severity findings to as much as $20,000, with participants asked to focus on vulnerabilities, bugs, and security flaws.

The company says it’s doing this to foster a more transparent and collaborative security environment — an important step in opening up the technology amid speculation about potential misuse of large language models (LLMs).

OpenAI bounty program

Security researchers, ethical hackers, and technology enthusiasts are all being asked to come together and help OpenAI to find – and understand – its flaws. A dedicated Bugcrowd page has been set up to handle submissions and rewards.

Researchers are being asked not to submit model safety issues via the bug bounty program, and instead to submit them via a separate form. OpenAI says this is because investigating such issues requires extensive research by specialists, and is thus beyond the scope of the bounty program.

OpenAI explains: “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed.”

However, other security bugs pertaining to ChatGPT are within scope for bounties, along with API targets, third-party corporate targets, OpenAI API keys, OpenAI Research Org, and other OpenAI targets. Each category has its own reward tiers, and not all are eligible for the full $20,000.
