https://t.me/AI_News_CN
📈 Status-page notifications for mainstream AI services | 🆕 Aggregated ChatGPT/AI news from across the web #AI #ChatGPT
🔙 Backup group: https://t.me/gpt345
✨ BEST AI relay https://api.oaibest.com — from 28% of list price; supports OpenAI, Claude Code, Gemini, Grok, DeepSeek, Midjourney, and file upload/analysis
Buy ads: https://telega.io/c/AI_News_CN
Announcing the OpenAI Safety Fellowship
Today we are announcing a call for applications to the OpenAI Safety Fellowship, a new program for external researchers, engineers, and practitioners to pursue rigorous, high-impact research on the safety and alignment of advanced AI systems. The program will run from September 14, 2026 through February 5, 2027.
We are looking for applicants interested in safety questions that matter for existing and future systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains, among others. We are especially interested in work that is empirically grounded, technically strong, and relevant to the broader research community.
Fellows will work closely with OpenAI mentors and engage with a cohort of peers. Workspace will be available in Berkeley alongside other fellows at Constellation, though fellows may also work remotely. Fellows are expected to produce a substantial research output by the end of the program, such as a paper, benchmark, or dataset. The fellowship includes a monthly stipend, compute support, and ongoing mentorship.
We welcome applicants from a range of backgrounds, including computer science, social science, cybersecurity, privacy, HCI, and related fields. We prioritize research ability, technical judgment, and execution over specific credentials. Letters of reference will be required.
For additional information regarding eligibility, compensation and benefits, see the application form. Fellows will receive API credits and other resources as appropriate, but will not have internal system access.
Applications are now open here and will close May 3. We will review all submissions and notify successful applicants by July 25. For any questions about the application process, please contact [email protected].
via OpenAI News
Degraded performance for Anthropic Models
Apr 6, 16:52 UTC
Investigating - We are investigating this issue.
via Cursor Status - Incident History
"One person plus AI is a complete company." Once seen as a niche experiment, the "one-person company" (OPC), now fully empowered by AI, is becoming a new mainstream path for entrepreneurship. Yang Ping, founder of Chengdu-based 觅核科技, graduated from the University of Electronic Science and Technology of China in 2020 and started his own venture in 2023, using AI tools to build businesses around trending online topics.
In February 2025, "Seven-Day Lover" (《七天爱人》), a song Yang Ping created with DeepSeek, surpassed 2 million plays on NetEase Cloud Music and earned five-figure royalties. He also released tutorials and programs tied to trending topics, steadily bringing in commercial orders.
In March this year, his website launched several new features; it now has more than 30,000 registered users and over 200,000 daily active users. Beyond its sizable user base, the site has introduced novel features such as "AI Hires Humans" and "Carbon-Based Circle" (碳基圈), which have become traffic highlights.
Among them, "Carbon-Based Circle" lets a user receive personalized replies from seven AIs when posting an update, and supports an "AI-only visibility" setting, satisfying the urge to express oneself while protecting privacy.
On the strength of this product, Yang Ping has received offers from multiple venture capital firms, with an angel-round valuation of 30 to 50 million yuan.
In his view, AI's power gives solo entrepreneurship unlimited possibilities, but it also brings the risk of being copied. Bringing in capital is not only about growing the user base; it also puts a "protective lock" on what he has built.
via cnBeta.COM (source: 快科技)
Elevated errors on Claude.ai
Apr 6, 15:45 UTC
Identified - We have identified an issue resulting in elevated errors on Claude.ai, including desktop and mobile. Users may experience errors when attempting to login, engaging with voice mode, or completing chats with Claude. We are working to resolve this issue as soon as possible.
via Claude Status - Incident History