
A Letter from Andrew Ng: The Democratization of AI

Published in 2023 by 智源社区

Dear friends,

AI risks are in the air — from speculation that AI, decades or centuries from now, could bring about human extinction to ongoing problems like bias and fairness. While it’s critically important not to let hypothetical scenarios distract us from addressing realistic issues, I’d like to talk about a long-term risk that I think is realistic and has received little attention: If AI becomes cheaper and better than many people at doing most of the work they can do, swaths of humanity will no longer contribute economic value. I worry that this could lead to a dimming of human rights.

We’ve already seen that countries where many people contribute little economic value have some of the worst records of upholding fundamental human rights like free expression, education, privacy, and freedom from mistreatment by authorities. The resource curse is the observation that countries with ample natural resources, such as fossil fuels, can become less democratic than otherwise similar countries that have fewer natural resources. According to the World Bank, “developing countries face substantially higher risks of violent conflict and poor governance if [they are] highly dependent on primary commodities.”

A ruler (perhaps dictator) of an oil-rich country, for instance, can hire foreign contractors to extract the oil, sell it, and use the funds to hire security forces to stay in power. Consequently, most of the local population wouldn’t generate much economic value, and the ruler would have little incentive to make sure the population thrived through education, safety, and civil rights.

What would happen if, a few decades from now, AI systems reach a level of intelligence that disempowers large swaths of people from contributing much economic value? I worry that, if many people become unimportant to the economy, and if relatively few people have access to AI systems that could generate economic value, the incentive to take care of people — particularly in less democratic countries — will wane.

Marc Andreessen recently pointed out that Tesla, having created a good car, has an incentive to sell it to as many people as possible. So why wouldn’t AI builders similarly make AI available to as many people as possible? Wouldn’t this keep AI power from becoming concentrated within a small group? I have a different point of view. Tesla sells cars only to people who generate enough economic value, and thus earn enough wages, to afford one. It doesn’t sell many cars to people who have no earning power.

Researchers have analyzed the impact of large language models on labor. While, so far, some people whose jobs were taken by ChatGPT have managed to find other jobs, the technology is advancing quickly. If we can’t upskill people and create jobs fast enough, we could be in for a difficult time. Indeed, since the great decoupling of labor productivity and median incomes in recent decades, low-wage workers have seen their earnings stagnate, and the middle class in the U.S. has dwindled.

Many people derive tremendous pride and a sense of purpose from their work. If AI systems advance to the point where most people no longer can create enough value to justify a minimum wage (around $15 per hour in many places in the U.S.), many people will need to find a new sense of purpose. Worse, in some countries, the ruling class will decide that, because the population is no longer important for production, people are no longer important.

What can we do about this? I’m not sure, but I think our best bet is to work quickly to democratize access to AI by (i) reducing the cost of tools and (ii) training as many people as possible to understand them. This will increase the odds that people have the skills they need to keep creating value. It will also ensure that citizens understand AI well enough to steer their societies toward a future that’s good for everyone.

Keep working to make the world better for everyone!

Andrew


