Disrupting malicious uses of AI: October 2025
Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying innovations that help people solve difficult problems and by building democratic AI grounded in common-sense rules that protect people from real harms.
Since we began our public threat reporting in February 2024, we’ve disrupted and reported over 40 networks that violated our usage policies. This includes preventing uses of AI by authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations.
In this update, we share case studies from the past quarter and explain how we detect and disrupt malicious use of our models. We continue to see threat actors bolt AI onto old playbooks to move faster, rather than gaining novel offensive capability from our models. When activity violates our policies, we ban accounts and, where appropriate, share insights with partners. Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users.
● Read the full report: https://cdn.openai.com/threat-intelligence-reports/7d662b68-952f-4dfd-a2f2-fe55b041cc4a/disrupting-malicious-uses-of-ai-october-2025.pdf
via OpenAI News