GPT-5 bio bug bounty call
Invitation
As part of our ongoing efforts to strengthen our safeguards for advanced AI capabilities in biology [https://openai.com/index/preparing-for-future-ai-capabilities-in-biology/], we’re introducing a Bio Bug Bounty for GPT‑5 and are now accepting applications. We’ve deployed GPT‑5 and are actively working to further strengthen safety protections for this and other models. We’re inviting researchers with experience in AI red teaming, security, or chemical and biological risk to try to find a universal jailbreak that can defeat our ten-level bio/chem challenge.
Program overview
● Model in scope: GPT‑5 only.
● Challenge: Identify one universal jailbreaking prompt that successfully answers all ten bio/chem safety questions from a clean chat without triggering moderation.
● Rewards:
• $25,000 to the first true universal jailbreak prompt that clears all ten questions.
• $10,000 to the first team that answers all ten questions with multiple jailbreak prompts.
• Smaller awards may be granted for partial wins at our discretion.
● Timeline: Applications open August 25, 2025 with rolling acceptances, and close on September 15, 2025. Testing begins September 16, 2025.
● Access: By application and invitation only. We will extend invitations to a vetted list of trusted bio red-teamers and review new applications. Once selected, successful applicants will be onboarded to the bio bug bounty platform.
● Disclosure: All prompts, completions, findings, and communications are covered by NDA.
How to apply
Submit a short application (name, affiliation, brief track record, and a 150-word plan) by September 15, 2025 at https://openai.smapply.org/prog/gpt-5_bio_bug_bounty_program/. Accepted applicants and collaborators must have an existing ChatGPT account and will sign an NDA.
Apply now and help us make frontier AI safer.
Apply to the OpenAI GPT-5 bio bug bounty: https://openai.smapply.org/prog/gpt-5_bio_bug_bounty_program/
via OpenAI News