NO SHARED THEORY OF MIND

GERALD SMALLBERG

Neurologist, Lenox Hill Hospital, New York City; playwright, Off-Off Broadway productions, Charter Members and The Gold Ring

My thinking about this year’s question is tempered by the observation made by Mark Twain in A Connecticut Yankee in King Arthur’s Court: “A genuine expert can always foretell a thing that is 500 years away easier than he can a thing that’s only 500 seconds off.” Twain was being generous: Forget the 500 seconds; we’ll never know with certainty even one second into the future. Still, humans can contemplate the future, and that capacity gave Homo sapiens its great evolutionary advantage. This talent for imagining a future has been the engine of progress, the source of creativity.

We’ve built machines that in simplistic ways are already “thinking,” solving problems and performing tasks we’ve designed. Currently they’re governed by algorithms that follow rules of logic, whether “crisp” or “fuzzy.” This intelligence, despite its vast memory and increasingly advanced processing mechanisms, is still primitive. In theory, as these machines become more sophisticated, they’ll at some point attain a form of consciousness, defined for the purposes of this discussion as the ability to be aware of being aware—most likely by combining the properties of silicon and carbon with digital and analog parallel processing, possibly even quantum computing, and networks that incorporate time delay.
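To make the distinction concrete: a “crisp” rule assigns a statement exactly one of two truth values, while a “fuzzy” rule assigns it a degree of truth between 0 and 1. Here is a minimal sketch in Python; the temperature example and its thresholds are my own arbitrary illustration, not anything these machines actually run:

```python
def crisp_is_hot(temp_c: float) -> bool:
    """Crisp logic: the statement "it is hot" is simply true or false.
    The 30-degree cutoff is an arbitrary, illustrative threshold."""
    return temp_c >= 30.0


def fuzzy_is_hot(temp_c: float) -> float:
    """Fuzzy logic: truth comes in degrees between 0.0 and 1.0.
    Membership ramps linearly from 20 C (not hot at all) to
    40 C (fully hot); the ramp endpoints are illustrative."""
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 40.0:
        return 1.0
    return (temp_c - 20.0) / 20.0


print(crisp_is_hot(29.0))  # False: one degree short, so "not hot" outright
print(fuzzy_is_hot(29.0))  # 0.45: hot to degree 0.45
```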

This form of consciousness, however, will be devoid of subjective feelings or emotions. There are those who argue that feelings are triggered by the thoughts and images that have become paired with a particular emotion. On this view, fear, joy, sadness, anger, and lust are emotions, whereas contentment, anxiety, happiness, bitterness, love, and hatred are feelings.

My opinion that machines will lack this aspect of consciousness rests on two considerations. The first is appreciating how we arrived at the ability to feel and have emotions. As human beings, we’re the end product of evolution by natural selection—a process that began with primitive organisms approximately 3.5 billion years ago. Over that vast stretch of time, the capacity for feelings and emotions emerged, and we’re not unique in the animal kingdom in possessing it. But within the last 150,000 to 300,000 years, Homo sapiens became singular in evolving the ability to use language and symbolic thought as part of how we reason, in order to make sense of our experiences and view the world we inhabit. Feeling, emotion, and intellectual comprehension are inextricably intertwined with how we think. Not only are we aware of being aware; our ability to think also enables us to remember a past and imagine a future. Using our emotions, feelings, and reasoned thoughts, we can form a Theory of Mind that lets us understand the thinking of other people, which in turn has enabled us to share knowledge as we have created societies, cultures, and civilizations.

The second consideration is that machines aren’t organisms, and no matter how complex and sophisticated they become, they won’t have evolved by natural selection. Regardless of how they’re designed and programmed, possessing feelings and emotions would be counterproductive to what will make them most valuable to us.

The driving force for building advanced intelligent machines will be the need to process and analyze otherwise incomprehensible amounts of information and data, helping us distinguish what’s likely to be true from what’s false, what’s relevant from what’s irrelevant. They will make predictions, since they, too, will be able to peer into the future while waiting (as will always be the case) for its cards to be revealed. They’ll have to be totally rational agents in order to perform these tasks accurately and reliably.

In their decision analysis, a system of moral standards will be necessary. Perhaps it will be a calculus incorporating such utilitarian principles as “The greatest happiness of the greatest number is the measure of right and wrong,” along with the Golden Rule, the foundational precept underlying many religions (“Treat others as you would like them to treat you”). The subjective values introduced by feelings and emotions would amount to a self-defeating strategy for solving the complex problems we’ll continue to face as we weigh what’s best for our own species along with the rest of the life with which we share our planet.
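Purely as an illustration (the candidate actions and welfare numbers below are invented, and any real calculus would be far subtler), such a utilitarian rule might in its simplest form reduce to choosing whichever action maximizes total welfare summed over everyone affected:

```python
# Hypothetical per-person welfare estimates for each candidate action.
candidate_actions = {
    "action_a": [0.9, 0.2, 0.1],
    "action_b": [0.5, 0.5, 0.5],
    "action_c": [1.0, 0.0, -0.2],
}


def greatest_happiness(actions: dict[str, list[float]]) -> str:
    """Return the action with the greatest total welfare:
    'the greatest happiness of the greatest number.'"""
    return max(actions, key=lambda a: sum(actions[a]))


print(greatest_happiness(candidate_actions))  # action_b (total welfare 1.5)
```

The sketch shows only the shape of the aggregation; it says nothing about where trustworthy welfare estimates would come from, which is the hard part.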

My experience as a clinical neurologist leads me to believe that we’ll be unable to read machines’ thoughts. But they’ll also be incapable of reading ours. There will be no shared Theory of Mind. I suspect the closest we can come to knowing this most complex of states is indirectly, by studying the behavior of these superintelligent machines. They will have crossed that threshold when they start replicating themselves and seeking an energy source solely under their own control. If this should occur, and if I’m still around (a highly unlikely expectation), my judgment about whether it presages a utopian or dystopian future will be based on my thinking—biased as always, since it will remain a product of analytical reasoning colored by my feelings and emotions.