WE NEED TO DO OUR HOMEWORK

JAAN TALLINN

Cofounder, Centre for the Study of Existential Risk, Future of Life Institute; founding engineer, Skype, Kazaa


Six months before the first nuclear test, the Manhattan Project scientists prepared a report called LA-602. It investigated the chances of a nuclear detonation having a runaway effect and destroying the Earth by burning up the atmosphere. This was probably the first time scientists performed an analysis to predict whether humanity would perish as a result of a new technological capability, making it the first piece of existential-risk research.

Of course, nuclear technology was not the last dangerous technology humans invented. Since then, the topic of catastrophic side effects has come up repeatedly in different contexts: recombinant DNA, synthetic viruses, nanotechnology, and so on. Luckily for humanity, sober analysis has usually prevailed and resulted in various treaties and protocols to steer the research.

When I think about machines that can think, I think of them as technology that needs to be developed with similar (if not greater!) care. Unfortunately, the idea of AI safety has been more challenging to popularize than, say, biosafety, because people have rather poor intuitions when it comes to thinking about nonhuman minds. Moreover, AI is really a metatechnology: a technology that can develop further technologies, either in conjunction with humans or perhaps even autonomously, thereby further complicating the analysis.

That said, there has been encouraging progress over the last few years, exemplified by the initiatives of new institutions, such as the Future of Life Institute, which have assembled leading AI researchers to explore appropriate research agendas, standards, and ethics.

Therefore, complicated arguments by people trying to sound clever about AI thinking, consciousness, or ethics are often a distraction from the trivial truth: The only way to ensure that we don't accidentally blow ourselves up with our own technology (or metatechnology) is to do our homework and take the relevant precautions, just as the Manhattan Project scientists did when they prepared LA-602. We need to set aside the tribal quibbles and ramp up AI safety research.

By way of analogy: Since the Manhattan Project, nuclear scientists have moved on from increasing the power extracted from nuclear fusion to the problem of how best to contain it, and we don't even call that nuclear ethics.

We call it common sense.