AI should not be a black box - FT中文网

AI should not be a black box

Spats at OpenAI highlight the need for companies to become more transparent
Sam Altman, chief executive of OpenAI. Researchers once released papers on their work, but the rush for market share has ended such disclosures
Proponents and detractors of AI tend to agree that the technology will change the world. The likes of OpenAI’s Sam Altman see a future where humanity will flourish; critics prophesy societal disruption and excessive corporate power. Which prediction comes true depends in part on foundations laid today. Yet the recent disputes at OpenAI — including the departure of its co-founder and chief scientist — suggest key AI players have become too opaque for society to set the right course.
An index developed at Stanford University finds transparency at AI leaders Google, Amazon, Meta and OpenAI falls short of what is needed. Though AI emerged through collaboration by researchers and experts across platforms, the companies have clammed up since OpenAI’s ChatGPT ushered in a commercial AI boom. Given the potential dangers of AI, these companies need to revert to their more open past.
Transparency in AI falls into two main areas: the inputs and the models. Large language models, the foundation for generative AI such as OpenAI’s ChatGPT or Google’s Gemini, are trained by trawling the internet to analyse and learn from “data sets” that range from Reddit forums to Picasso paintings. In AI’s early days, researchers often disclosed their training data in scientific journals, allowing others to diagnose flaws by weighing the quality of inputs.
Today, key players tend to withhold the details of their data to protect against copyright infringement suits and eke out a competitive advantage. This makes it difficult to assess the veracity of responses generated by AI. It also leaves writers, actors and other creatives without insight into whether their privacy or intellectual property has been knowingly violated.
The models themselves lack transparency too. How a model interprets its inputs and generates language depends upon its design. AI firms tend to see the architecture of their model as their “secret sauce”: the ingenuity of OpenAI’s GPT-4 or Meta’s Llama pivots on the quality of its computation. AI researchers once released papers on their designs, but the rush for market share has ended such disclosures. Yet without the understanding of how a model functions, it is difficult to rate an AI’s outputs, limits and biases.
All this opacity makes it hard for the public and regulators to assess AI safety and guard against potential harms. That is all the more concerning as Jan Leike, who helped lead OpenAI’s efforts to steer super-powerful AI tools, claimed after leaving the company this month that its leaders had prioritised “shiny products” over safety. The company has insisted it can regulate its own product, but its new security committee will report to the very same leaders.
Governments have started to lay the foundation for AI regulation through a conference last year at Bletchley Park, President Joe Biden’s executive order on AI and the EU’s AI Act. Though welcome, these measures focus on guardrails and “safety tests”, rather than full transparency. The reality is that most AI experts are working for the companies themselves, and the technologies are developing too quickly for periodic safety tests to be sufficient. Regulators should call for model and input transparency, and experts at these companies need to collaborate with regulators.
AI has the potential to transform the world for the better — perhaps with even more potency and speed than the internet revolution. Companies may argue that transparency requirements will slow innovation and dull their competitive edge, but the recent history of AI suggests otherwise. These technologies have advanced on the back of collaboration and shared research. Reverting to those norms would only serve to increase public trust, and allow for more rapid, but safer, innovation.
