
Bilingual Full-Text Open Letter | Pause Giant AI Experiments

Future of Life Institute | David的AI全景图 | 2023-11-22
On March 29, the Future of Life Institute published an open letter on its website calling on all AI labs to immediately pause the training of AI models more powerful than GPT-4, so that there is enough time to prepare for strong artificial intelligence.
Signatories include 2018 Turing Award laureate Yoshua Bengio, Elon Musk, Stuart Russell (author of Artificial Intelligence: A Modern Approach), Apple co-founder Steve Wozniak, the co-founders of Skype and Pinterest, and the CEO of Stability AI, among other well-known figures; by press time, the number of signatories had reached 1,125.

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

-End-

The automatic translation in this article was provided by LanguageX; click "Read the original" to access the original open letter.

About the Future of Life Institute (from Wikipedia):

The Future of Life Institute (FLI), founded in March 2014, is a research and advocacy organization based in the Boston area of the United States. It is dedicated to reducing the risks facing humanity, particularly the potential risks arising from the development of artificial intelligence technology.

The institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard doctoral student Viktoriya Krakovna, Boston University doctoral student Meia Chita-Tegmark (Tegmark's wife), and UC Santa Cruz cosmologist Anthony Aguirre. Members of its advisory board include computer scientist Stuart Russell, biologist George Church, cosmologists Stephen Hawking and Saul Perlmutter, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and well-known actors Alan Alda and Morgan Freeman.

[Previous Highlights]


ChatGPT Panorama | Products & Business

ChatGPT Panorama | Background & Technology

ChatGPT Panorama | Competitive Landscape

Machine Translation Product Landscape

When Will Machine Translation Be as Good as Human Translation?

Machine Translation Frontiers | LanguageX Releases Machine Translation for Professional Translators
