Associate professor of psychology, Union College; coauthor (with Daniel Simons), The Invisible Gorilla: How Our Intuitions Deceive Us
I’ve often wondered why we human beings have so much trouble thinking straight about machines that think.
In the arts and entertainment, machines that can think are often depicted as simulacra of humans, sometimes down to the shape of the body and its parts, and their behavior suggests that their thoughts are much like our own. But thinking doesn’t have to follow human rules or patterns to count as thinking. Examples of this abound: Chess computers outthink humans not because they think about chess the way humans do, only better, but because they think in an entirely different way. Useful language translation can be done without deep knowledge of grammar.
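To make the contrast concrete, consider what that different way looks like in miniature. Chess itself won’t fit in a few lines, so here is a minimal sketch of the same exhaustive-search idea applied to a toy game, Nim (a stand-in of my choosing, not any real engine’s code). The machine “thinks” by enumerating every continuation and keeping score; there are no concepts, plans, or intentions anywhere in it:

    def minimax(stones, maximizing=True):
        # Nim: players alternate taking 1 to 3 stones; whoever takes
        # the last stone wins. Returns +1 if the player to move can
        # force a win, -1 otherwise. The "thinking" is pure enumeration.
        if stones == 0:
            # No stones left: the previous player took the last one.
            return -1 if maximizing else 1
        outcomes = [minimax(stones - take, not maximizing)
                    for take in (1, 2, 3) if take <= stones]
        return max(outcomes) if maximizing else min(outcomes)

    print(minimax(4))  # -1: whoever moves on a 4-stone pile loses
    print(minimax(5))  # +1: take 1 stone and leave the losing 4-pile

A human player eventually recognizes the pattern (piles of four are losing); the machine reaches the same answer without recognizing anything.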
Evolution has endowed human beings with the ability to represent and reason about the contents of other human minds. By the time children start school, they can keep track of what different people know about the same set of facts (this is a prerequisite for lying). Later, as adults, we use this capacity to figure out how to negotiate, collaborate, and solve problems for the benefit of ourselves and others. This piece of mental equipment is often called Theory of Mind, and it springs into action even in situations where there are no “minds” to represent. Videos of two-dimensional shapes moving around on computer screens can tell stories of love, betrayal, hate, and violence that exist entirely in the mind of the viewer, who temporarily forgets that polygons don’t have emotions.
Maybe we have trouble thinking about thinking machines because we don’t have a correspondingly intuitive Theory of Machine. Mentally simulating a simple mechanical device consisting of a few interlocking gears—say, figuring out whether turning the first gear will cause the last gear to rotate left or right, faster or slower—is devilishly difficult. Complex machines consisting of abstract algorithms and data are just as alien to our built-in mental faculties.
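The gear puzzle, at least, shows how little machinery the machine itself needs. Here is a minimal sketch, assuming a simple train of externally meshing spur gears in which each gear touches only its neighbors (the tooth counts are hypothetical). Two lines of arithmetic do what mental simulation does slowly and unreliably: every mesh reverses the direction of rotation, and every mesh scales the speed by the ratio of tooth counts.

    def last_gear_motion(teeth, rpm_first=1.0):
        # Walk down the train: each external mesh flips direction and
        # rescales speed by the ratio of the meshing tooth counts.
        rpm = rpm_first
        for a, b in zip(teeth, teeth[1:]):
            rpm = -rpm * a / b
        direction = "same as first gear" if rpm > 0 else "opposite to first gear"
        return direction, abs(rpm)

    # Four gears of 8, 20, 12, and 30 teeth: three meshes, so the last
    # gear turns opposite to the first, at 8/30 of its speed.
    print(last_gear_motion([8, 20, 12, 30]))

The computation is trivial; it’s our unaided simulation of it that fails.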
Perhaps this is why, when faced with the notion of thinking machines, we fall back on understanding them as though they were thinking beings—in other words, as though they were humans. We apply the best tools our mind has—namely, Theory of Mind and general-purpose reasoning. Unfortunately, the former isn’t designed for this job and the latter is hampered by our limited capacities for attention and working memory. Sure, we have disciplines like physics, engineering, and computer science that teach us how to understand and build machines, including machines that think, but years of formal education are required to appreciate the basics.
A Theory of Machine module would ignore intentionality and emotion and instead specialize in representing the interactions of different subsystems, inputs, and outputs to predict what machines would do in different circumstances, much as Theory of Mind helps us to predict how other humans will behave.
If we did have Theory of Machine capacities built into our brains, things might be different. Instead, we seem condemned to see the complex reality of thinking machines, which think according to principles quite different from the ones we’re used to, through the simplifying lens of assuming they’ll be like thinking minds, perhaps reduced or amplified in capacity but essentially the same. Since we’ll be interacting with thinking machines more and more as time goes on, we need to figure out how to develop better intuitions about how they work. Crafting a new module isn’t easy, but our brains did it when written language was invented, by reusing existing faculties in a clever new way. Perhaps our descendants will learn the skill of understanding machines in childhood as easily as we learned to read.