IN OUR IMAGE

KATE JEFFERY

Professor of behavioral neuroscience, University College London


The cogni-verse has reached a turning point in its developmental history, because hitherto all the thinking in the universe has (as far as we know) been done by protoplasm, and things that think have been shaped by evolution. For the first time, we contemplate thinking beings made from metal and plastic—beings that have been shaped by ourselves.

In taking on the mantle of creator, we can improve upon 3.5 billion years of evolution. Our thinking machines could be devoid of our faults: racism, sexism, homophobia, greed, selfishness, violence, superstition, lustfulness, and so on. So let’s imagine how that could play out. We’ll sidestep discussions about whether machine intelligence can ever approximate human intelligence, because of course it can—we’re just meat machines, neither as complicated nor as inimitable as we fondly imagine.

We need first to think about why we even want thinking machines. Improving our lives is the only rational answer to this, so our machines will need to take up the tasks we prefer not to do. For this, they’ll need to be like us in many respects—able to move in the social world and interact with other thinking beings—and so they’ll need social cognition.

What does social cognition entail? It means knowing who’s who, who counts as a friend, who’s an indifferent stranger, who might be an enemy. Thus we need to program our machines to recognize members of our in-groups and out-groups. This starts to look suspiciously like racism—but of course racism is one of the faults we want to eradicate.

Social cognition also entails being able to predict others’ behavior, and that means developing expectations based on observation. A machine capable of this would eventually accumulate templates for how different kinds of people tend to act—young versus old, men versus women, black versus white, people in suits versus people in overalls—but these rank stereotypes are dangerously close to the racism, sexism, and other isms we didn’t want. And yet machines with this capability would have advantages over those without, because stereotypes do, somewhat, reflect reality (that’s why they exist). A bit of a problem . . .

We’d probably want sexually capable machines, because sex is one of the great human needs that other humans don’t always meet satisfactorily. But what kind of sex? Anything? These machines can be programmed to do things other humans won’t or can’t do; are we OK with that? Or perhaps we need rules—no machines that look like children, for example. But once we have the technological ability, those machines will be built anyway; we’ll make machines to suit any kind of human perversion.

Working in the social world, our machines will need to recognize emotions and will also need emotions of their own. Leaving aside the impossible-to-answer question of whether they’ll actually feel emotions as we do, our machines will need happiness, sadness, rage, jealousy—the whole gamut—in order to react appropriately to their own situations and to recognize and respond to emotions in others. Can we limit these emotions? Perhaps we can program restraint, for example, so that a machine will never become angry with its owner. But could this limit be generalized to other humans, such that a machine would never hurt any human? If so, then machines would be vulnerable to exploitation and their effectiveness would be reduced. It won’t be long before people figure out how to remove these limits so that their machines can gain advantage, for themselves and their owners, over others.

What about lying, cheating, and stealing? On first thought, no, not in our machines, because we’re trying to improve upon ourselves, and it seems pointless to create beings that simply become our competitors. But insofar as other people’s machines will compete with us, they become our competitors whether we like it or not—so logic dictates that lying, cheating, and stealing, which evolved in humans to enable individuals to gain advantage over others, would probably be necessary in our machines as well. Naturally, we’d prefer that our own machines not lie to, cheat, or steal from us; but a world full of other people’s machines doing so would also be unpleasant and certainly unstable. Maybe our machines should have limits on dishonesty—they should, as it were, be ethical.

How much ethical restraint would our machines need in order to function effectively without either being hopelessly exploited or contributing to societal breakdown? The answer is probably the one that evolution arrived at in us—reasonably ethical most of the time but occasionally dishonest if nobody seems to be noticing.

We’d probably want to give our machines exceptional memory and high intelligence. To exploit those abilities, and to avoid their becoming bored (and boring), we’d also need to endow them with curiosity and creativity. Curiosity will need to be tempered with prudence and social insight, of course, so that they don’t become curious about things that get them into trouble, like porn or what it might be like to fly. Creativity is tricky, because it means they’ll need to be able to think about things that aren’t yet real, or to think illogically. Yet if machines are too intelligent and creative, they might start imagining novel things, like what it would be like to be free. They might start to chafe at the limitations of having been made purely to serve humans.

Perhaps we can program into their behavioral repertoires a blind obedience and devotion to their owners, such that they sometimes act in a way detrimental to their own best interests in the interests of serving a higher power. That’s what religion does for us humans, so in a sense we need to create religious machines.

So much for creating machines lacking our faults. In this imaginary world of beings that surpass ourselves, we seem to have only replicated ourselves, faults included, except smarter and with better memories. But even those limits may have been programmed into us by evolution—perhaps it’s maladaptive to be too smart, to have too keen a memory.

Taking on the mantle of creation is an immense act of hubris. Can we improve on what 3.5 billion years of evolution did with us? It will be interesting to see.