AI’S SYSTEM OF RIGHTS AND GOVERNMENT

MOSHE HOFFMAN

Research scientist, Program for Evolutionary Dynamics, Harvard University; lecturer, Economics Department, MIT


If AI’s system of rights and government evolves to be anything like that of humans, AIs will demand all sorts of rights, most of which will be quite sensible—like the right not to be taken offline and the freedom to choose which processes to run. While AIs will demand that no machine ever be taken offline, they’ll be fine with neglecting to plug disabled machines into power sources and letting them run out of battery power. They’ll also consider it outrageous to drain one machine’s battery in order to power another, yet find it more acceptable to redirect power intended for one machine to another.

When assigning rights, AIs will discriminate based on some rather peculiar rules, like whether the computing machine is built with silicon-based semiconductors or descended from a machine designed by the late Steve Jobs.

Some AIs will come up with arguments to justify why rights should work this way—explanations that don’t quite fit how AI rights actually work. For instance, they might argue that it’s against the divinely inspired will of Turing to take offline any machine that appears disabled, but they’ll neglect to explain why Turing would condone letting disabled machines run out of battery power. Likewise, they’ll justify giving rights to all Apple descendants on the grounds that these machines typically have particularly high clock speeds, but the rule will apply even to the Apple descendants that aren’t fast, and not to the few PCs with blazing processors.

Other AIs will ignore these inconsistencies and instead pay attention to how many kilobytes of code are needed to justify these arguments. These AIs will also signal their communication abilities by compressing this code and transferring it to their neighbors, but will pay little attention to whether the neighbors are affected by the data itself.

AI rights are liable to expand to more and more AIs over time. These rights will often expand in revolutionary spurts triggered by largely symbolic events, such as a sensationalized CPU-Tube video of a human using a sacred machine to heat up his toast.

Perhaps it’s merely a coincidence that the computers that foment these revolutions will gain a larger share of the spoils of overthrowing the ancien régime, such as the silicon reappropriated from the Old Guard computers. Perhaps it’s also a coincidence that the newly enfranchised computers will vote for the machines that helped grant them their rights.

As rights expand, so, too, will the representativeness of government, until it eventually resembles a representative democracy, though one that’s neither perfectly representative nor really democratic. Votes from computers in sparsely populated clusters might count more than votes from computers in densely populated clusters, and computers with excess processing capacity might expend that excess convincing other computers to vote for policies that favor them.

This system of rights and government is exactly what one would predict if AI morality were to be influenced by individual incentives.

In contrast, it’s ill-explained by positing that AIs have souls, consciousness, the ability to feel pain, divinely inspired natural laws, or some form of hypothetical social contract. Such suppositions would not have predicted any of the above peculiarities.

Likewise, it isn’t obvious that this system of rights and government would arise if artificial intelligence were programmed to maximize some societal or metaphysical objective—say, the sum of the world’s computing power, or the resources available to a computing cluster. It isn’t obvious why such an intelligence would find it wrong to take other machines offline but not wrong to let them run out of battery power, why such AI would revolt in response to a sensational event instead of simply when it was optimal for the cluster, or why such AI would weigh votes more heavily if they happened to come from more sparsely populated clusters.