Announcing the OpenAI Safety Fellowship
Today we are announcing a call for applications to the OpenAI Safety Fellowship, a new program for external researchers, engineers, and practitioners to pursue rigorous, high-impact research on the safety and alignment of advanced AI systems. The program will run from September 14, 2026 through February 5, 2027.
We are looking for applicants interested in safety questions that matter for existing and future systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains, among others. We are especially interested in work that is empirically grounded, technically strong, and relevant to the broader research community.
Fellows will work closely with OpenAI mentors and engage with a cohort of peers. Workspace will be available in Berkeley alongside other fellows at Constellation, though fellows may also work remotely. Fellows are expected to produce a substantial research output by the end of the program, such as a paper, benchmark, or dataset. The fellowship includes a monthly stipend, compute support, and ongoing mentorship.
We welcome applicants from a range of backgrounds, including computer science, social science, cybersecurity, privacy, HCI, and related fields. We prioritize research ability, technical judgment, and execution over specific credentials. Letters of reference will be required.
For additional information regarding eligibility, compensation and benefits, see the application form. Fellows will receive API credits and other resources as appropriate, but will not have internal system access.
Applications are now open here and will close May 3. We will review all submissions and notify successful applicants by July 25. For any questions about the application process, please contact [email protected].
via OpenAI News