Disrupting malicious uses of AI | February 2026
In the two years since we began publishing these threat reports, we have gained important insights into the ways threat actors attempt to abuse AI models. In particular, the case studies in this report, as in our earlier reports, illustrate how threat actors typically use AI in combination with other, more traditional tools such as websites and social media accounts. Threat activity is seldom limited to one platform; as our report on a Chinese influence operator shows, it is not always limited to one AI model. Rather, threat actors may use different AI models at various points in their operational workflow. We share these insights in our threat reports so that our industry, and wider society, can be better placed to identify and avoid such threats.
Read the full report here (opens in a new window): https://cdn.openai.com/pdf/df438d70-e3fe-4a6c-a403-ff632def8f79/disrupting-malicious-uses-of-ai.pdf
via OpenAI News