
外滩专访 | 对话诺奖得主迈克尔·斯宾塞:关于AI发展与监管

CF40研究部 中国金融四十人论坛 2023-10-01

近几十年来,技术进步的惊人速度似乎并未显著推动全球经济增长,而以ChatGPT为代表的生成式人工智能(Artificial Intelligence, AI)技术的问世与突破,被许多人寄予为人类社会带来深刻变革的希望。

相比传统AI技术,生成式AI有哪些特点?其将如何影响整体的劳动生产率,以及如何改变劳动力市场?人工智能产业已经出现泡沫了吗?其中最值得关注的风险有哪些、如何建立治理框架?如何确保AI技术的普惠属性,避免其再次加深数字鸿沟?

围绕这些问题,在第五届外滩金融峰会前夕,我们与即将出席本届峰会并发表演讲的峰会国际顾问委员会委员、斯坦福大学商学院Philip H. Knight教授及名誉院长、诺贝尔经济学奖得主迈克尔·斯宾塞(Michael SPENCE)进行了一场深入交流。

斯宾塞认为,AI在提升更广泛领域的生产力和性能方面的潜力非常巨大。生成式AI具有两方面重要特点,一是其首次做到了仅根据简单提示就可轻松切换领域,这颠覆了人们对于AI的传统认知;二是生成式AI的使用门槛很低。由此来看,生成式AI的确具有变革性。

AI在某些职业中势必会比人类做得更好,所以一定程度的就业替代难以避免,但这并不意味着AI将取代人类。在斯宾塞看来,与饱受议论的“图灵陷阱”相比,使用强大的AI系统来增强人类的表现是更可能、也更理想的一种情况。

对于伴随而来的风险,斯宾塞列举了生成式AI在数据使用过程中所涉及的安全、隐私保护、权利归属问题,并指出生成式AI善于制造“幻觉”,导致其自身成为有力的造假和欺诈工具。此外,斯宾塞强调,要为AI发展创建一个“平衡议程”(balanced agenda)。具体而言,“人类不光需要遏制AI发展带来的负面结果和风险,还需要确保AI发展的积极结果能够在经济中充分地分散和传播,以保证未来人们不会面临不理想的竞争环境、不会出现‘死水’和AI技术无法触及之处。”斯宾塞表示。

谈及AI监管,斯宾塞提出,构建AI监管框架的难点在于两个方面,一是如何平衡监管与创新,二是国际机制如何做出统筹,尽可能协调各国努力。在他看来,当前全球趋势之一是国际机构的边缘化,如果无法改变这一点,则至少可以找到不同国家拥有重要共同利益的若干领域,例如气候变化,立足于此追求国际合作。而在所有合作领域中,AI技术都可以扮演重要角色。
“我们完全不必单纯注重AI,而是要注重以AI为组成部分的切实挑战,专注于全球共同利益而达成合作协议。”斯宾塞表示。
以下为中文访谈纪要:


1

在您看来,受人工智能技术影响最为深刻的领域都有哪些,人工智能是否会给人类社会带来“里程碑”式变革?其影响力体现在哪些方面?

Michael SPENCE:人工智能的发展还处于早期阶段。AI革命始于语言和语音识别,随后发展为图像和物体识别,现在我们有了生成式AI大语言模型(Large Language Model, LLM)的惊人突破,这也是近几年才有的——这方面的研究始于2017年由8名学者合著的著名论文《无可或缺的注意力》(Attention Is All You Need)。其中,有些作者来自谷歌,不过现在他们可能都已创立了自己的公司。
包括我在内的大部分人都认为,AI在提升更广泛领域的生产力和性能方面的潜力非常巨大。这是猜测或者预测吗?在现阶段,确实如此,因为人们还在开展成千上万的实验和探索,来明确如何使用AI。

所以,对这个问题的直接回答是:AI目前已经经历、以及未来或将经历的一系列突破可能会带来经济的变革。其终点在哪里?大语言模型的终点是Alphabet首席执行官桑达尔·皮查伊(Sundar Pichai)所说的“知识经济”。在那之前,我们还有很多事情要做。比如,我们需要在机器人领域实现突破,来扩展数字足迹。

生成式AI有几个非常有趣、非常重要的特点。第一,生成式AI首次做到了仅根据简单的提示就可轻而易举地切换领域。比如,你先对它提出关于意大利文艺复兴的问题,再切换到数学问题等,它都能对答如流。这是很不一般的、全新的,与此前人们对AI的传统认知——即“AI在有限的领域内表现最好”——背道而驰。第二,生成式AI的使用门槛很低,尽管创造它需要大量的技术训练,但用户不需要太多技术训练就能上手。
所以,是的,我认为生成式AI是具有变革性的,而我们尚处在监管实验的早期阶段。


2

人工智能技术将如何影响整体的劳动生产率,以及如何改变劳动力市场?

Michael SPENCE:对于部分工作来说,尤其是在知识经济中,AI系统肯定会比人做得更好。看待这个问题的一个角度是,什么情况下AI胜过人类?什么情况下人类胜过AI?总有些事情是AI做得更快或更准确的,所以一定程度的就业替代肯定会发生。这是一定程度的自动化。
整体来看,在可预见的未来,AI可能性最大的应用方式是——按我的说法——“强有力的数字系统模型”(Digital System Model),所以在相应领域,它会取代部分就业,但不会完全将人类排除在外。

以具体应用为例,我们会完全淘汰掉资产管理分析师而转向AI吗?我很怀疑。医生和护士会不会使用AI来辅助工作?可能会有;那我们会淘汰掉这些职业本身吗?不会。

斯坦福大学教授Erik Brynjolfsson在其发表的一篇颇具影响力的论文中指出,AI开发中存在一种偏向,他将其称为“图灵陷阱”(the Turing Trap),即人们倾向于通过自动化来替代和超越人类能力(而不是对人类能力形成补充)。

其观点之中肯定有一些部分是不可否认的,但我认为,比起替代人类,使用强大的AI系统来增强人类的表现是更可能、也更理想的一种情况。我希望我们朝着这个方向发展。坦白说,我不太担心大规模失业,AI发展更可能的结果是巨大的生产率增长。与此相关的是,可能会出现暂时性就业的问题。

但这只是宏观层面的影响。在微观层面,使用数字工具会给就业带来很大颠覆。生成式AI大语言模型可以生成许多东西的初稿,例如,原本医生要花大量时间写工作报告,现在生成式AI提供的相对精确的初稿,可以帮助节省80%的时间,这是纯粹的收益。但这是否意味着我们不再需要那么多医生了?我对此表示怀疑。不过,对于媒体通讯行业的撰稿人,我能预想到AI会对其产生显著的就业影响。

不过,尽管现在为此下结论为时尚早,但我们也不能假设说上述此类的微观层面的冲击不会发生。再举例来说,生成式AI模型可以撰写计算机代码的“初稿”,这已经得到了演示验证。这会提高软件工程师编程的效率。如果其影响很大,或许我们确实不需要那么多编程人员了,但我们恰恰生活在一个“软件驱动一切”的时代,所以对软件的需求也会大幅增长。我们很难判断两者相抵后会产生怎样的净效应。

所以,你能判断这些影响中哪些更为显著吗?我认为不能,还要继续观望。我个人的观点是,人类社会不太可能仅仅因为AI已经显示出的强悍能力就发生大规模的就业替代。
如果再过25年,AI经过了测试、运行,我们就更难以预测未来世界会是什么样子、人类和数字化机器之间的关系又会是怎样的。不过我相信,未来十年左右的时间里,我们会看到AI真正的潜能被释放出来,这十年间,我们很可能会看到机器与人类之间展开高效的协作。


3

有观点认为,人工智能产业已经出现泡沫,您对此有何看法?

Michael SPENCE:问题的答案取决于“泡沫”的具体含义。如果泡沫是指市场脱离现实,那我认为目前AI不存在这个问题;如果泡沫是指市场估值过高、甚至远高于当前实际情况,我认为这是有可能的。
历史上我们也经历过泡沫,比如20多年前的互联网泡沫——当时,人们意识到了数字技术是非常强大的,能够给经济的诸多方面带来转型。但在早期阶段,一些新创建的技术公司并不成功,最终倒闭;估值虚高的情况也有许多,最终也都回落了。

但你要是问,那时候人们对于数字技术(如电商、金融科技等)将对经济产生深远影响的预测——尽管预测可能过度乐观——是否正确?其实这个预测并没有错,只是时间没有踩准。技术兑现潜力所需的时间总是长于预期:认识到技术的潜力与潜力真正实现之间,往往有很长的时间差。

我认为我们现在也是同样的情况。对于AI技术将对经济乃至其他方面产生巨大影响的预测是正确的。AI会不会产生风险和负面影响?会。明天就会发生吗?不会。与人们猜想的情况相比,这些事情成为现实的时间可能会晚得多。商业模式必然会变化,组织形式会变化,人类行为也需要变化,而且人类也会在变化之中习得新的技能。而以上全部内容,都不是在短时间内就会发生的事情。
所以,我认为现在确实存在估值过高的问题,但这背后的原因是市场预期我们正在经历一些根本性的转变。


4

您认为生成式AI最值得关注的风险有哪些、如何尽可能抑制这些风险?

Michael SPENCE:生成式AI会产生很多风险,我甚至不知从何说起。
首先,是一系列新的数据问题。这其中当然包括数据使用涉及的安全、隐私保护、权利归属问题等,但这些不是新的问题。新的问题在于生成式AI大语言模型所使用的训练数据来自整个互联网上几乎所有以数字形式存在的内容。这些内容是否涉及数据确权问题?还是说生成式AI大语言模型可以随意使用互联网上的一切内容?这一问题需要仔细思考。

第二,生成式AI是一种相对有力的造假和欺诈工具,比如,它可被用于制造假新闻、混淆公众视听。这会产生实际的风险,也由此涉及到监管问题。

另外还有一些相对个别的风险,我不会太担心。比如,美国有一个著名的案例,涉及到生成式AI所带来的“幻觉”风险:一位律师使用生成式AI起草了一份诉状,并将其呈递给法庭。不幸的是,这份诉状引用的判例是虚构的、并不实际存在的。在AI世界中,我们把这叫做“幻觉”(hallucinations)。这位律师对此毫不知情,却由此陷入很大麻烦,因为他告诉法庭有大量的判例和法律案件可以作为支持——但实际上这些都不存在。生成式AI确实有可能制造“幻觉”,用户对此要小心。创建这些生成式AI工具的人对此显然有着充分的认识。

很多时候,事实准确性是很重要的——这种情况下,“幻觉”确实是个问题。不过对于一些创造性的行业,编造内容可能也没那么差劲。例如,写一首新歌、画一幅画、制作一个新的视频等,这其中自然也涉及到版权归属问题,但也能为艺术家提供新的灵感。生成式AI能提供大量的反馈,从而达到这个效果。在有些情况下,这些“幻觉”甚至是好的而不是坏的。新的思想能创造新的事物。

但我必须要再说一点。如果我们要为AI发展创建一个“平衡议程”(balanced agenda),就要注重两点。第一是AI技术带来的一系列风险和潜在的滥用问题;第二点,也是我认为尚未得到足够重视的一点,是需要一系列政策来保障AI这一强大的新技术在整个经济的各类积极应用场景中的可及性和可用性——我们姑且将其理解为确保AI能够提振生产力。中国、美国、也可能包括欧洲的大科技公司和大银行,有资源基于AI模型提供的应用程序接口(API)去探索和建设相关的用例和应用。但中小企业呢?他们能否跟上?仅靠市场机制能保证这一点吗?

我不认为这是个明智的假设。在过去几轮数字技术的普及过程中,我们看到了不同行业、不同企业之间会呈现差异,技术行业、金融行业在数字技术应用中会处于领先地位,至少在美国是这样,而有一些行业则严重落后。AI技术的推广过程中可能也会出现这种差异。

我认为,所谓“平衡议程”还应该包括一点,这一点不是如何遏制负面结果和风险,而是如何确保积极的结果能够在经济中充分地分散和传播,以保证我们不会面临不理想的竞争环境、不会出现“死水”和AI技术无法触及之处。因此,我希望我们从现在开始就重新平衡AI议程,使其能够促进AI技术的扩散和广泛应用。

我们迫切需要生产率增长——至少西方国家迫切需要,因为我们的经济增速在下降,也面临诸多供给侧挑战。如果能实现生产率的迅速增长,就能显著缓解经济增长面临的供给侧局限;而如果无法实现生产率增长,我们就很难实现许多其他目标,例如为推进能源转型和应对气候变化进行大规模投资。

事实上,我们已经面临着主权债务攀升、利率持续走高、财政空间收缩等种种困境,而现在每年还要为能源转型筹集三到四万亿美元的资金。提振生产力和经济增速将极大地提高我们实现其他关键目标——即可持续和包容性发展——的能力。

所以,为了实现这个目标,我们不能依靠于“只有技术领导者应用AI技术、而把其他人都甩在后面”的模式,我们需要的是整个系统的改变。生成式AI有潜力为经济中的每一个人所用,这就是“平衡议程”的另一面。这与监管无关,更多的是推广,借助公共部门的支持,推广信息技术、开展技能培训,从而实现我们想要的结果。


5

构建AI监管框架的难点有哪些?

Michael SPENCE:这取决于具体的地区,不同国家和地区之间差别很大。欧洲本土基本没有开发这类技术的主导性企业,却在相对激进地开展AI监管;与中国和美国相比,欧洲缺乏确保监管不妨碍创新应用、实验与探索的那种激励。
另一方面,各国、各地区也有共同点。我们都要解决数据问题。从开源数据的角度看,我们是否需要对生成式AI的应用进行一定的限制?我认为我们还没找到答案。此外,我也不认为我们最终得到的答案会是一样的,因为各国的政治体系、文化都不尽相同,例如,中国政府和美国政府在两国数字化发展中所扮演的角色就存在原则性不同。

这里涉及两个难题,一是如何平衡监管与创新,或者如何平衡监管与所有社会所重视的其他价值。二是从国际角度来看,我们需要国际机制做出统筹,尽可能协调各国努力。

平衡监管与创新,这是个很难的问题。我不认为AI和国家安全无关,这意味着AI发展几乎必然会面临限制。

我们不可避免地会看到对技术及相关产品的流动施加限制。我认为挑战在于,如何以合作的方式尽可能缩小限制性措施的范围,我猜想这也是大多数政府会尝试采取的做法。

我不认为任何一个理智的人会真的以国家安全为由,寻求在全球范围内大规模叫停贸易往来与技术转移。划定出真正关乎国家安全的技术确实很难,但值得努力。

国际方面,我们已经开始使用AI来增强全球供应链的透明性。全球供应链是非常复杂的,仅靠人力不可能搞清楚所有正在发生的事情,而AI则有用武之地。这个过程已经开始了,但这就意味着,AI将在依赖于数据的国际贸易与商业环境中运作。我们必然需要制定原则和监管架构来明确这方面的边界。

讨论这一问题的出发点在于,我们还处在AI发展的早期阶段。我们无法准确预测AI将如何以及会以怎样的顺序对经济产生冲击;另一方面,我们也无法准确猜测目前正在探索中的AI监管最终将会如何演变。对此,人们有着大量不同的观点,有些人甚至认为AI对人类的存在构成威胁,但我不认为这是主流观点。
正因为AI是如此新兴的事物,所有人在某种程度上都在进行关于AI风险与机遇的自我教育。我无法想象短期内我们能简单地解决这些问题。我们的目标在于对此保持开放的态度、维持适当的平衡。


6

如何建立起协调、统一、有效的跨国人工智能治理架构?

Michael SPENCE:在由民族主义支配的国际环境下,这是不可行的,所以我们需要发挥国际机构的作用。随着数字化的重要性日益凸显,我们需要现有的、甚至新的国际机构来协调和管理各国互动。我们需要国际机构来发挥平台作用,协调各国进行探讨,并确保相关讨论是包容性的。
我们现在正在经历的趋势之一是国际机构的边缘化,这是地缘政治冲突和民族主义兴起所产生的副作用之一。这种趋势是适得其反的,对所有人都不利。在当前的时代,我们需要一个能够探讨合作的平台。

如果无法改变这一点,另一种办法是,我们至少可以找到不同国家有重要共同利益的几个领域——其中大家说的最多、我也认同的是气候变化。我们可以致力于保证不破坏能源转型和全球经济可持续发展所需的技术转型及其推广。我们可以专注于共同利益,求同存异,这将是一个巨大进步。
而在上述所有领域中,AI技术都可以扮演重要角色。我们完全不必单纯注重AI,而是要注重以AI为组成部分的切实挑战,专注于全球共同利益而达成合作协议。其中一个共同挑战就是,为了实现共同目标,我们真正需要向全世界传播什么样的AI技术?


7

放眼全球,如何促进以人工智能为代表的本轮科技创新向包容性增长的方向发展,而非仅仅让少数国家或人群变得更有数字优势?

Michael SPENCE:这是个很重要的挑战,这是“平衡议程”的一部分。我们一方面要注重创造广泛的利益,另一方面也要保证技术具有广泛的可及性。生成式AI技术本身没有问题,它和以往的技术不一样,具有广泛可及性,普通人也能用,学习如何输入提示是很简单的事情。
但放眼全球,我们确实需要一个可行的计划,防止出现大国一骑绝尘、其他国家望尘莫及的情况。这个假设有点极端,因为我们也看到很多新兴经济体有大量的AI创业和创新活动,包括欧盟也是,只不过欧盟在尖端科技领域明显落后于中美两国。只要不出现新的壁垒,我相信这些国家是会有发展的。

我们需要的是切实的议程,让当前的全球增长模式更具包容性。更脆弱的新兴市场经济体面临债务困境,可能会出现债务违约、需要重组,新冠疫情耗尽了它们的财政空间,这些国家还面临着极难应对的气候冲击——气候冲击对于所有人来说都是难题,但对于资源本就有限、还在不断消耗的国家来说尤甚——我们已知的气候冲击已经多到记不住了。例如,据估计,利比亚东北部地区因大坝损毁引发的水灾而死亡的人数可能会超过2万。

AI并不与此直接相关,但我认为,多年以来一直促进全球经济包容性增长的许多前进动力已经不复存在,例如新兴经济体的迅速增长、中产阶级的崛起等。并非所有国家都是如此,但有不少国家是脆弱的,面临被别的国家甩在身后的风险。这就与AI息息相关。

在我们的经济中,机器正在承担更多任务,不管是生产制造业还是大众服务业。这些脆弱的经济体需要探索新的增长模式。随着数字经济的发展,过去驱动包括中国在内的很多国家发展的劳动密集型制造业、装配业至少在短期内将无法继续发挥作用。因此,我们需要AI技术广泛传播,促进其在整个全球经济中的应用。但据我所知,没有任何一个国家把这作为最重要的事情之一来做,希望我们有一天能做到。


8

本届外滩金融峰会围绕科技创新与生产力革新设置多场讨论,您最关注其中哪个环节或哪些议题?您对外滩金融峰会的未来有何期待?

Michael SPENCE:我很期待我会发表演讲的环节,即以“应对新冠危机:中美欧宏观政策回顾与展望”为主题的CF40-Euro50经济学家学术交流会暨外滩闭门会。我很期待听取其他人的想法。
我也看到本届外滩峰会还设置了关于技术转型、金融科技等议题的环节,我对这些环节也很期待。除此以外,有趣的环节还有很多。

我对外滩金融峰会的期待一直在于,外滩金融峰会能够邀请有着丰富经验和专业知识的人齐聚一堂、分享观点,这种思想碰撞所带来的是我们能够以新的方式看待世界,可能也会找到我们想要追求的新的目标,不仅对我们自身和我们所处的机构有益,甚至还能为更宏大的事业做出贡献。

不幸的是,疫情影响和地缘政治冲突极大地遏制了国际交流与互动,但这恰恰是非常重要的。我们需要分享思想、了解彼此。外滩金融峰会是一个重要的促进交流的平台,我也非常高兴参与其中。

英文访谈纪要

Q1:In your view, what are the areas that have been most profoundly reshaped by AI technologies? Will AI bring about a “milestone” revolution for human society?

Michael SPENCE:AI is in early stages. The AI revolution goes back to language and speech recognition and then image and object recognition, and now we have the amazing breakthrough of the large language generative AI models. That’s very recent – that research started in 2017 with that famous paper that the eight authors wrote. Some of them were from Google, and now they may all have their own companies.

Most people, including me, think that the potential in terms of productivity and improved performance, if you want, at a slightly broader front, in a very, very wide array of areas, is very high. Now, is that a guess, or a forecast? Yes, at this stage it is, because there’s literally thousands and thousands of experiments and explorations being conducted to find out how to use these things.

The straight answer to the question is that the sequence of AI breakthroughs and likely future ones could result in a transformative change in the economy. In terms of where it lands? The large language models basically land in what Sundar Pichai, the CEO of Alphabet, described as the knowledge economy. There’s more to do. We need breakthroughs in robotics in order to expand, yet once again, the digital footprint.

But I think it’s a huge thing that generative AI has a couple of really interesting characteristics. One is that, really for the first time, the AI switches domains easily in response to very simple kinds of prompts. If you ask it about the Italian Renaissance and then switch to mathematics and so on, it goes with you. That’s very unusual and new. It runs counter to the conventional wisdom in artificial intelligence up to that point – that AIs perform best in restricted domains. The second thing is it’s accessible. You don’t need much technical training in order to use it. You do need technical training to create it.

So, yes, I think it’s potentially revolutionary and we are in the regulatory experimental early stages.

Q2: How will AI technologies impact the overall productivity and the labor market?

Michael SPENCE:There will certainly be aspects of work, particularly in the knowledge economy, in which the artificial intelligence systems will do a better job. One way to think about this is when does the AI outperform the human, and when does the human outperform the AI. So, there’s a set of things that AIs do either faster, or more accurately, and so there will be some displacements. That is some degree of automation.

Overall, at least for the foreseeable future, the most likely use of AIs is something that I call the powerful digital system model, so that takes part of the job, but it doesn’t take humans out of the equation. When you think of the applications of it, are we going to get rid of analysts for asset management and turn it over to AIs? I doubt it. Are nurses and doctors going to have AIs that assist them in their work? Yes, probably. Are we going to get rid of them? No.

I think it’s right to admit Erik Brynjolfsson at Stanford wrote an influential essay that I referenced saying there is a bit of a bias that he called the Turing Trap, which is the lean in the direction of automation and replacing human activity. While it would be crazy to deny some of that, I think it is more likely and better to focus on augmenting human performance using these powerful systems - I hope that’s where we go. I’m not terribly worried about a massive loss of employment opportunity, frankly. The more likely outcome is a kind of large productivity gain. Maybe there will be transitory employment problems associated with that.

Having said that, that’s kind of at the macro level. At the micro level, there will be a lot of change in jobs using these digital tools, and so there could be a fair amount of disruption. AI’s generative LLMs are going to write first drafts of stuff. If you think about a doctor who spends an enormous amount of time recording what he or she has done, having a first draft that’s reasonably accurate can reduce that time by 80% to get the report done. That’s just pure gain. Do we need fewer doctors? I doubt it. For people who write copy for media and communications, I can imagine there will be a significant employment effect.

Again, it’s too early to tell. I think it would be wrong to assume that there will be no micro impacts of that kind. Let me give another example. These models are capable of producing “drafts” of computer code. That’s been demonstrated. That’ll make software engineers writing computer code more efficient. If it was a big effect, maybe we need fewer programmers. But we live in the age where software is going to drive everything, so there will be huge incremental demand as well. The net effect is hard to know in advance.

So, the question is, sitting in an armchair, can you figure out which of those effects is bigger? I don’t think so. There’s a lot to wait and see. My personal view is that it’s not likely that you’ll see massive loss of employment opportunities just because AIs have demonstrated themselves to be reasonably capable.

Now, if you go on 25 years, and these things have been tested and run, I think it gets a lot harder to guess what the world would look like and what the relationship between humans and digital machines is going to be. So, I cannot go there, but I think of the implementation of the true potential of this occurring over the next 10 years or so, and in that 10 years, we’re going to see mostly a kind of productive collaboration between machine and humans.

Q3: There is the view that AI is already in the middle of a bubble. Do you share that concern?

Michael SPENCE:I’m not too worried about it. The question is what do you mean by “bubble”. Some people think a bubble is when the market gets disconnected from reality and there is really nothing there, right? I don’t think that’s the case for AI. Are the markets pretty frothy in the sense of valuations? Probably yes. So, if by a bubble you mean they’ve overshot the mark relative to the current underlying reality, you could make a good case for that.

We have precedent for this, in the internet bubble more than 20 years ago. At that time, people realized that these digital technologies were pretty powerful and would transform aspects of the economy. But in the early stages there were companies created that didn’t make any sense that eventually failed; there were valuations that didn’t make any sense and they eventually came down. But if you ask about that forecast, even though it was an excessively exuberant one, that this was going to have a profound effect on our economy, on e-commerce, fintech, etc., the forecast wasn’t wrong – it’s just the timing wasn’t quite right. It always takes longer than expected. Recognizing the potential and seeing it realized and implemented occur on different time scales.

I think we are in the same situation now. I don’t think the forecast that this is going to be very impactful on the economy and more broadly than that is wrong. Is it going to have some potentially risky downside effects? Yes. Is it going to happen tomorrow? No. It takes way too long relative to the initial speculation for these things to come into reality. Business models have to change, organizations change, people need to change their behavior and acquire new skills – all that sort of thing is not something that will happen in a single quarter.

So, I think the valuations are pretty high right now, but the underlying reality is that they are anticipating that something pretty fundamental is happening.

Q4: What are the most notable risks that generative AI creates, and how to bring them under control? 

Michael SPENCE:There’s a large set of risks. I don’t know where to start.

First of all, there’s a new set of data issues related to security and privacy and whose rights are recognized in the way data is used. And that’s not new. What’s new is that generative AI, the big models, are trained on essentially the entire Internet. They are trained on virtually everything that’s out there in digital form. So that raises the question of whether there are any rights associated with that or do the LLM’s have a free run on the internet with everything that has been published. That needs to be thought through.

Second, generative AI is a fairly powerful tool for creating fake news and fraud, that kind of thing, influencing people. So, there’s regulatory issues with respect to that and there’s real risks associated with that.

Then there are idiosyncratic ones that I don’t worry so much about. There is a well-known case in the United States involving hallucinations. A lawyer used generative AI to produce a brief that was presented to the court, and the generative AI, unfortunately for this lawyer, cited legal precedents that it had made up – that is, they don’t exist. These are called hallucinations in the AI world. And the lawyer didn’t know that and presented it to the court and got into serious trouble, because he told the court there’s a whole bunch of precedents and legal cases that actually don’t exist. They’ll produce hallucinations and one has to be kind of careful. The creators of these generative AI models are well aware obviously.

Factual accuracy is important in many contexts – there, hallucinations are a problem. But in creative industries, making stuff up isn’t so bad. If it makes up a new song or a new picture or a new video, there are attribution issues associated with this for sure, but it might just give a creative artist new ideas. There’s a fair amount of feedback to that effect. In some contexts, these hallucinations are actually good, rather than bad. New ideas create new things.

There is one other thing I might mention, though. A balanced agenda with respect to AI needs to focus on two things. One is a set of risks and potential misuses of the technology; the second one, which I’m afraid is getting too little attention, is that there is a set of policies that are designed to make this powerful new technology accessible and usable in its positive applications, let’s call it productivity for a moment, across the entire economy. So, big tech companies and big banks in China and the United States, maybe in Europe, will have the resources to explore building use cases and applications on top of the AI models using the application programming interfaces (APIs) they are creating. But what about the small- and medium-sized businesses? Are they going to be fine? Is the market system going to get the job done by itself?

I don’t think it is a safe assumption. In past rounds of digital technology penetration, we’ve seen a pattern of divergence both across sectors and companies. The tech sector and the finance sector tend to be pretty advanced, at least in the United States, while some other sectors are seriously lagging behind. That divergence could emerge here.

For me, the other part of a balanced agenda is associated not with protecting us from negative outcomes and risks, but with making sure that the positive outcomes are dispersed and diffuse widely in the economy so that we don’t get unfortunate competitive outcomes, backwaters, places where it doesn’t penetrate. So, I’m hoping that we can rebalance that agenda in the direction of diffusion and widespread adoption.

If you really want the surge in productivity which we desperately need, (at least in the west we need it because we have declining productivity, declining growth and all kinds of supply side headwinds) and so a productivity surge, if it can be achieved, would be a major change in the supply side constraints to growth. And if we don’t get that, it would be much more difficult to achieve other objectives such as massive investments in the energy transition and climate change. If you think about it, we have rising sovereign debt levels, rising interest rates, declining fiscal space, and then now we are supposed to spend an extra of 3 to 4 trillion dollars a year on the energy transition…anyway, it’s fairly easy to see that productivity and growth would be a major boost in addressing other crucial objectives: sustainability and inclusiveness.

So, in order to achieve that, you really need this technology not to be adopted only by the tech leaders while everybody else is behind, you need the whole system. And generative AI is potentially applicable essentially to everyone in the economy. So that would be the other part of the agenda. It’s not regulatory. It’s more promotion, public sector support, diffusion of information technology, skills training, etc, that would help ensure that kind of outcome.

Q5: What are the major bottlenecks in building an AI regulatory framework?

Michael SPENCE:It depends on where you are. There are really big differences. Europe, which tends not to have the main players that are generating this technology, is moving fairly aggressively to regulate artificial intelligence. What they don’t have is the same kind of incentive that you have in China and the United States to make sure that we don’t get in the way of innovative uses, experimentation, exploration and all that sort of thing.

On the other hand, there are common elements. The data question has to be addressed. Are there limits that need to be placed on the use of generative AI with respect to publicly available data? I don’t think the answers to this have emerged yet. Furthermore, I don’t think the answers that will emerge, when they do, in different places will be the same. Because the political systems and the cultures are different as you go from one place to the other. So, the role of government in China with respect to digital is very different in principle than the role of government in places like the United States.

So, there are two issues there. One is finding the balance between regulation and innovation and whatever other values deemed important in the society. And then in the international arena, we need international institutions to mediate the process, trying to make these things match together as much as we can – and this can be very difficult.

Balancing security and innovation is a hard problem. I don’t think anybody would argue that AI doesn’t have anything to do with national security, regardless of whose national security you are talking about, which means that it’s almost inevitable that there are going to be restrictions.

Inevitably, we are going to see restrictions on the flow of technology and products that support it. I think the challenge is to limit those restrictions cooperatively to the extent we can. I suspect that’s the approach that most governments will try to take.

I don’t think sensible people realistically want to use national security as an excuse for a massive shutdown in trade and technology transfer on a global basis. While it’s not an easy challenge to ring-fence the technologies that are essential to national security, it is worth the effort. 

In the international arena, let me give an example. Artificial intelligence is starting to be used to increase the transparency of global supply chains. They are very complex. It’s almost impossible for humans to figure out all the things that are going on, but the AIs have a reasonable contribution to make in that area. And that process is starting. But that means AIs are going to be operating in what is quintessentially a data-dependent international trade and commerce environment. It’s almost surely true that we are going to need principles and regulatory structures that define the boundaries for that. That’s just one example.

So, the bottom line is that we are in early enough stages. We don’t have a precise forecast of how and in what sequence the AIs are going to hit the economy. We don’t really, on the other side, have a very precise guess as to how the regulatory processes that are underway will turn out. And the range of opinions is quite large. Some people think these AIs are an existential threat to humanity; but I don’t think that’s the majority view. And because it’s so new, everybody in some sense, all of us, are educating ourselves about both risks and opportunities. You can’t imagine there is some kind of short-run easy resolution to this at all. The goal is to maintain an open mind and a sense of balance.

Q6: How can countries establish a coordinated, unified and effective transnational governance framework?

Michael SPENCE:This is not going to be doable in an environment dominated by nationalism. So, we need the international institutions; maybe we even need new ones. As digital becomes even more important, we need existing and possibly new international institutions to intermediate and manage the interactions.

We need the international institutions to be the forum and to mediate the discussion and make sure that the scope is inclusive.

The current trend toward marginalization of the international institutions as a side effect of geopolitical tensions and rising nationalism is counterproductive. That’s a negative for everybody. This is an area where you need a forum where you explore options with respect to how to cooperate.

Another way to go about it is to say, look, there’s not much we can do about this, it’s just part of the scenery. But one thing we can agree on is that we have a few areas where we have important common interests. The one that is most cited, correctly, is the climate challenge. So, another way to go about this is to say what do we need to do to make sure we don’t disrupt the technology transformation and its diffusion, that are needed to accomplish the energy transition and move to a sustainable global economy. We could focus on that, and just agree to disagree on other things. That would be a major step forward.

And AI plays a role in all of these things. The way to go about this may not be to focus entirely just on AI, but to focus on real challenges in which AI is a component and let the cooperative agreements emerge from focusing on common problems including what kinds of AI do we really need to spread around the world in order to achieve the objectives that we all share.

Q7: How to make sure that the current round of technological boom represented by AI supports inclusive growth across the world, as opposed to making a minority group of countries or people more digitally privileged?

Michael SPENCE:That’s an important challenge. It’s part of the balanced agenda. It’s to focus on widespread benefits as well as widespread accessibility and use. The technology itself is not necessarily problematic. Because it’s different, it’s accessible, ordinary people can use it. It’s not that hard to learn to inject prompts.

But we do need, if you look at it from the global point of view, some kind of a plausible plan that prevents an outcome like the great powers race forward and everybody else is standing there looking from the sidelines. That’s a bit too extreme, because there are a lot of entrepreneurs and innovative activity in a wide range of emerging economies, for example, and even in Europe which is clearly behind both China and the United States in terms of advanced technology. They’ll bring some of this unless we introduce extraordinary and currently non-existent barriers.

But there is a real agenda here, and it’s part of the agenda of restoring inclusiveness to global growth patterns right now. The more vulnerable emerging economies have debt distress, potential defaults and restructuring needs; they have limited fiscal space because the pandemic used it all up; their climate shocks are very difficult to deal with – they are difficult for everybody to deal with but especially with limited and shrinking resources. The list of climate shocks that have gotten attention is so long now that it’s hard to remember them all. Just south of us, in northeastern Libya, the current estimate is that more than 20,000 people died in floods when dams broke.

This isn’t directly related to AI, but I think there is a pattern that we’ve lost the forward momentum that we had for many years in the global economy in terms of inclusiveness – the rapid growth of emerging economies, the rising middle class and so on. Not everywhere. Now there is a set of countries that are vulnerable and at risk of being left behind. And AI is part of that story. We are moving to economies in which machines do more and more things, whether it’s production or the massive service sectors. These economies need new growth models. The labor-intensive production and assembly that drove growth in many countries including China, at least for a while, may not work as the digital economy grows. And so, we need this technology to be spread and get adapted in the entire global economy, but as far as I know that’s not high on anyone’s agenda at the moment. Hopefully we’ll get there.

Q8: The 5th Bund Summit features multiple sessions themed around technological innovation and productivity revolution. Which session or what topics are of your utmost concern? What are your expectations for the event going forward, especially given that you sit on our International Advisory Council?

Michael SPENCE:I’m looking forward to the session I’m in, which is focused on comparing the macroeconomic conditions and responses in the major parts of the global economy which should be interesting. I look forward to listening to the other people.

But I think these multiple sessions on technology transformation, and on fintech and finance in general, tend to be an important focus of the Bund Summit. So, I’m looking forward to those, but there are just so many interesting sessions.

My expectations are always the same. If you get people with experience and relevant knowledge together, and they share their ideas, the main output is that everybody comes away with a new way of seeing the world and a new set of things that they might want to pursue, beneficial not only to themselves and their organizations but also to a larger cause, if I can put it that way.

Unfortunately, the combination of the pandemic and the geopolitical tensions have reduced the amount of interactions that we have dramatically. Those interactions are really important. We need to share ideas and know what’s going on in each other’s backyard. So, the Bund Summit is an important event and organization that promotes that, and I’m glad to be part of it.

在将于本周启幕的第五届外滩金融峰会上,Michael SPENCE将出席9月22日的CF40-Euro50经济学家学术交流会暨外滩闭门会“应对新冠危机:中美欧宏观政策回顾与展望”,并在以“2020-2022中美欧宏观政策:效果与影响”为主题的专题研讨一环节发表演讲。

聚焦Michael SPENCE尤为关注的科技创新、技术转型、金融科技等话题,本届外滩峰会设置全体大会、外滩圆桌、外滩闭门会等多个环节,议题覆盖“赋能实体经济与共建科创金融”“窥见未来:新技术打开新世界”“金融支持科创发展与技术驱动下的金融创新”等。在以“前沿科技对金融发展及监管的影响”为主题的外滩圆桌中,与会嘉宾将围绕AI等新技术如何对经济金融产生影响以及如何提升监管能力展开思想碰撞。

第五届外滩金融峰会由中国金融四十人论坛(CF40)与中国国际经济交流中心(CCIEE)联合主办,主题为“迈向新征程的中国与世界:复苏与挑战”。峰会持续聚焦绿色发展、国际金融、资产管理、金融科技四大主题,坚持“国际化”、“专业化”定位,为上海建成具有全球重要影响力的国际金融中心,为中国作为建设性力量参与国际治理,为国际社会消弭分歧、增进互信、凝聚共识,贡献价值与力量。

关注中国金融四十人论坛公众号和外滩金融峰会官网,第一时间获取第五届外滩金融峰会更多亮点和完整议程。


版面编辑:马欣雨|责任编辑:瑟瑟 李俊虎

撰文:宥朗 瑟瑟|翻译:佳茜

视觉:李盼 东子

监制:李俊虎 潘潘
