COULD THINKING MACHINES BRIDGE THE EMPATHY GAP?

MOLLY CROCKETT

Associate Professor, Department of Experimental Psychology, University of Oxford; Wellcome Trust Postdoctoral Fellow, Wellcome Trust Centre for Neuroimaging, University College London

We humans are sentenced to spend our lives trapped in our own heads. Try as we might, we can never truly know what it’s like to be someone else. Even the most empathetic among us will inevitably encounter an unbridgeable gap between self and other. We may feel pangs of distress upon seeing someone stub their toe or learning of another’s heartbreak. But these are mere simulations; others’ experiences can never be directly felt and so can never be directly compared with our own. The empathy gap is responsible for most interpersonal conflicts, from prosaic quibbles over who should wash the dishes to violent disputes over sacred land.

This problem is especially acute in moral dilemmas. Utilitarian ethics stipulates that the basic criterion of morality is maximizing the greatest good for the greatest number—a calculus requiring the ability to compare welfare, or “utility,” across individuals. But the empathy gap makes such interpersonal utility comparisons difficult if not impossible. You and I may both claim to enjoy champagne, but we’ll never be able to know who enjoys it more, because we lack a common scale for comparing these rather subjective values. As a result, we have no empirical basis for determining which of us most deserves the last glass. Jeremy Bentham, the father of utilitarianism, recognized this problem: “One man’s happiness will never be another man’s happiness; a gain to one man is no gain to another. You might as well pretend to add twenty apples to twenty pears.”
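
To see why such comparisons are ill-posed, consider a minimal sketch in Python (the numbers are invented for illustration). Each person's utility scale can be stretched or shifted, a positive affine transformation, without changing any of that person's own choices, so any cross-person comparison can be flipped at will:

    # Invented numbers: enjoyment of the last glass of champagne,
    # each measured on its owner's private scale.
    your_enjoyment = 7.0
    my_enjoyment = 5.0
    print(your_enjoyment > my_enjoyment)  # True: the glass "should" be yours

    # Rescaling my private units (2x + 1) changes none of my own
    # choices, yet it reverses the interpersonal comparison.
    my_enjoyment = 2.0 * my_enjoyment + 1.0
    print(your_enjoyment > my_enjoyment)  # False: now it "should" be mine

Nothing in my behavior pins down the scale, so Bentham's apples-and-pears complaint is mathematics, not mere pessimism.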

Human brains cannot solve the interpersonal utility comparison problem. The Nobel laureate John Harsanyi spent a couple of decades on it in the middle of the twentieth century, and his theory is recognized as one of the best attempts so far. But it falls short because it fails to account for the empathy gap: Harsanyi assumed perfect empathy, wherein my simulation of your utility is identical to your utility. Psychology research and our own personal experience leave little doubt that human empathy is nothing of the sort.
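
Harsanyi's proposal can be written compactly (the notation below is mine, not his): social welfare is a weighted sum of individual utilities, and perfect empathy equates my simulation of your utility with your utility itself.

    % Harsanyi-style aggregation; \hat{u}_{i \to j} denotes person i's
    % simulation of person j's utility.
    \[
      W(x) = \sum_i w_i \, u_i(x),
      \qquad
      \hat{u}_{i \to j}(x) = u_j(x) \quad \text{(perfect empathy)}
    \]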

Could thinking machines be up for the job? Bridging the empathy gap would require a way to quantify preferences and translate them into a common currency, comparable across individuals. Such an algorithm could provide an uncontroversial set of standards that could be used to create better social contracts. Imagine a machine that could compute an optimal solution for wealth redistribution by accounting for the preferences of everyone subject to taxation, weighing them equally and comparing them accurately. Although the shape of the solution is far from clear, its potential benefits are self-evident.
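
As a deliberately naive sketch of what such a computation might look like, the Python toy below assumes the hard part is already solved: everyone's preferences have been translated into one common scale (here, log utility of money). It then searches for the flat tax rate, with revenue rebated equally, that maximizes total welfare. All names and numbers are invented.

    import math

    incomes = [20_000, 45_000, 80_000, 250_000]  # hypothetical pre-tax incomes

    def utility(income):
        # The assumed common currency: log utility, i.e., diminishing
        # marginal utility of money. Justifying one scale for everyone
        # is precisely the empathy gap described above.
        return math.log(income)

    def total_welfare(rate):
        # Flat tax at `rate`; all revenue is rebated in equal shares.
        rebate = sum(i * rate for i in incomes) / len(incomes)
        return sum(utility(i * (1 - rate) + rebate) for i in incomes)

    # Weigh every person equally; search rates from 0% to 99%.
    best = max((r / 100 for r in range(100)), key=total_welfare)
    print(f"welfare-maximizing flat tax: {best:.0%}")

Because the toy ignores incentives, its concave utility pushes it toward near-total equalization (it prints 99 percent); the point is only the shape of the computation: quantify preferences, translate them to a common scale, weigh them equally, optimize.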

Machines that can bridge the empathy gap could also help us with self-control. In addition to the empathy gap between self and others, there exists a similar gap between our present and future selves. Self-control problems stem from the never-ending tug-of-war between current and future desires. Perhaps AI will one day end this stalemate by learning the preferences of our present and future selves, comparing and integrating them, and making behavioral recommendations on the basis of the integration. Think of a diet healthy enough to foster weight loss but just tasty enough so you’re not tempted to cheat, or an exercise plan challenging enough to improve your fitness but just easy enough so you can stick with it.
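
A minimal sketch of that integration, with invented options, invented scores, and an arbitrary fifty-fifty weighting between selves:

    # Taste stands in for the present self's utility, health for the
    # future self's; all values are invented for illustration.
    plans = {
        "pizza every night": {"taste": 0.9, "health": 0.2},
        "salad only": {"taste": 0.2, "health": 0.9},
        "mostly healthy, one weekly treat": {"taste": 0.7, "health": 0.8},
    }

    def integrated(scores, present_weight=0.5):
        # How heavily the present self should count is itself a value
        # judgment; the arithmetic cannot settle it.
        return (present_weight * scores["taste"]
                + (1 - present_weight) * scores["health"])

    print(max(plans, key=lambda p: integrated(plans[p])))
    # -> mostly healthy, one weekly treat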

Neuroscientists are uncovering how the human brain represents preferences. We should keep in mind that AI preferences needn’t resemble human ones and, indeed, may require a different code altogether if they’re to tackle problems human brains can’t solve. Ultimately, though, the code will be up to us, and what it should look like is as much an ethical question as a scientific one. We’ve already built computers that can see, hear, and calculate better than we can. Creating machines that are better empathizers is a knottier problem—but achieving this feat could be essential to our survival.