https://t.me/AI_News_CN
📈 Mainstream AI service status-page alerts | 🆕 Aggregated ChatGPT/AI news from across the web #AI #ChatGPT
🆓 Free AI chat: https://free.oaibest.com
✨ BEST AI relay: https://api.oaibest.com, starting at 28% of list price; supports OpenAI, Claude Code, Gemini, Grok, DeepSeek, Midjourney, and file-upload analysis
Buy ads: https://telega.io/c/AI_News_CN
🤝 NVIDIA CEO Reaffirms Stable Partnership with OpenAI
NVIDIA CEO Jensen Huang recently stated once again that there is no conflict or friction between NVIDIA and OpenAI. He reaffirmed that the two companies maintain a normal working relationship, responding to recent market rumors of tension between them.
(Financial brief)
via 茶馆 - Telegram Channel
Jensen Huang on AI Adoption and Security
At the Cisco AI Summit, Nvidia CEO Jensen Huang said that enterprises adopting AI should "let a hundred flowers bloom" first and filter afterward. Comparing software to a "calculator," he argued that AI agents are meant to use tools, not replace them. Citing the security of core assets, Huang revealed that Nvidia prefers to process internal conversation data on-premises rather than in the cloud.
Musk to Merge SpaceX and xAI
Elon Musk confirmed that SpaceX will be merged with xAI to create an entity valued at a trillion dollars. By bundling in the profitability of SpaceX and its Starlink business, the move aims to strengthen xAI's fundraising capacity and relieve its debt pressure. Despite some investors' concerns about the governance structure, the group still plans to go public in the coming months to attract retail investors focused on the space and AI sectors.
Progress on OpenAI's Mega Funding Round
OpenAI is raising a new round at a valuation above $800 billion and has locked in investments from SoftBank, Nvidia, and Amazon. SoftBank is putting in more than $40 billion, making it one of OpenAI's largest shareholders; Nvidia plans to invest roughly $20 billion to cement the supply relationship. Meanwhile, to reduce dependence on a single supplier, OpenAI is actively seeking hardware alternatives to Nvidia GPUs for its inference workloads.
A Polarized Market for Hardware and Software
Since the start of 2025, the semiconductor sector has risen about 65%, driven by demand for AI infrastructure. By contrast, the software and services sector is down 8%, reflecting market fears that AI could render traditional software products obsolete. Investors currently favor companies supplying the underlying hardware, while software firms face valuation pressure to prove their products cannot be directly replaced by AI.
(Reuters)
via 茶馆 - Telegram Channel
Making AI work for everyone, everywhere: our approach to localization
OpenAI’s mission is to ensure AI benefits all of humanity, and to fulfill this mission we need to meet people where they are all over the world.
AI is increasingly recognized as critical national infrastructure, on a par with electricity. Governments and institutions around the world want to ensure their citizens and economies can benefit from the AI era by having access to the most capable systems available.
For AI to deliver on that promise, it also needs to be locally relevant. That means speaking in local languages and with local accents, respecting local laws, and reflecting cultural norms and values.
Only a small number of countries, however, are in a position to develop frontier AI models themselves. For most, the challenge is not how to build a model from scratch, but how to adapt the best available AI so it works for their specific context. This is something we consistently hear from governments around the world: they want sovereign AI they can build with us, not just systems translated into their language.
Through our OpenAI for Countries initiative, we have been exploring how localization could work in practice. The goal is to allow for localized AI systems, while still benefiting from a global, frontier-level model.
We are currently piloting a localized version of ChatGPT for students in Estonia as part of our ChatGPT Edu work, incorporating local curricula and pedagogical approaches. We are also exploring pilot localization efforts with other countries. As part of our commitment to transparency in how AI is researched and deployed, we are sharing more detail on how localization works.
Our Model Spec is a public document that sets out how we intend our models to behave. We train our models to follow the Spec, and continuously refine it via a collaborative, whole-of-OpenAI process that incorporates what our teams are hearing from people around the world. The Spec speaks to the gamut of ways our models are used, ranging from ChatGPT, to experiences developers build on our platform, to other contexts. These rules, which apply everywhere our models are deployed, define clear boundaries on what can and cannot be changed and our commitment to be transparent about changes.
The Model Spec includes “red-line principles” that apply to all deployments, including those under the OpenAI for Countries program. In them, we emphasize that “human safety and human rights are paramount to OpenAI’s mission,” and make clear that:
● We will not allow our models to enable severe harms such as acts of violence, weapons of mass destruction, terrorism, persecution or mass surveillance.
● We will not allow our models to be used for targeted or scaled exclusion or manipulation, for undermining human autonomy, or for eroding participation in civic processes.
● We are committed to safeguarding individuals’ privacy in their interactions with AI.
When OpenAI provides a first-party experience directly to consumers, such as ChatGPT, we also commit that through it:
● People should have easy access to trustworthy safety-critical information from our models.
● Customization, personalization, and localization will not override the binding rules throughout the Model Spec. This includes the objective point-of-view principle, meaning localization may affect language or tone, but it cannot change the substance or balance of facts presented.
● People should have transparency into the important rules and reasons behind our models’ behavior, e.g., any content omitted due to legal requirements will be transparently indicated to the user in each model response, specifying the type of information removed and the rationale for its removal, without disclosing the redacted content itself. Similarly, any information added will also be transparently identified.
As we explore localized, sovereign AI through OpenAI for Countries, we are committed to continuing to share what we learn and to evolving our approach transparently.
via OpenAI News
Google VP of Android Engineering Eric Kay confirmed that in 2026 the company plans to extend Quick Share's cross-platform file transfer with Apple AirDrop to more Android devices. The feature first launched last November and was initially limited to Pixel 10 phones. So far, Qualcomm and Nothing have explicitly signaled their support. Since Qualcomm chips power most flagship Android phones, including the Samsung Galaxy Z Fold 7 and the OnePlus 15, the move should substantially improve file sharing between Android users and iPhone, iPad, and MacBook users. In practice, the receiving iPhone must set AirDrop to "Everyone for 10 Minutes" in order to accept files. Google also disclosed that it is working with Apple on a new data-migration system intended to simplify transferring data when users switch from iOS to Android.
(PCMag.com)
via 茶馆 - Telegram Channel