WHEN I SAY “BRUNO LATOUR,” I DON’T MEAN “BANANA TILL”

JOHN NAUGHTON

Vice-President, Wolfson College, Cambridge; Emeritus Professor of the Public Understanding of Technology, Open University; author, From Gutenberg to Zuckerberg: What You Really Need to Know About the Internet


What do I think about machines that think? Well, it depends what they think about and how well they do it. For decades I’ve been an acolyte of Doug Engelbart, who believed that computers were machines for augmenting human intellect. Power steering for the mind, if you like. He devoted his life to the pursuit of that dream, but it eluded him because the technology was always too crude, too stupid, too inflexible to enable its realization.

It still is, despite Moore’s Law and the rest of it. But it’s getting better, slowly. Search engines, for example, have in some cases become a workable memory prosthesis for some of us. But they’re still pretty dumb. So I can’t wait for the moment when I can say to my computer, “Hey, do you think Robert Nozick’s idea about how the state evolves is really an extreme case of network effects in action?,” and get an answer that’s approximately as good as what I get from the average grad student.

That moment, alas, is still a long way off. Right now, I’m finding it hard to persuade my dictation software that when I say “Bruno Latour,” I don’t mean “Banana till.” But at least the “personal assistant” app on my smartphone knows that when I ask for the weather forecast I get the one for Cambridge, U.K., rather than Cambridge, MA.

But this is pathetic stuff, really, when what I crave is a machine that can function as a proper personal assistant, something that can enable me to work more effectively. Which means a machine that can think for itself. How will I know when the technology is good enough? Easy: when my artificially intelligent, thinking personal assistant can generate plausible excuses that get me out of doing what I don’t want to do.

Should I be bothered by the prospect of thinking machines? Probably. Certainly Nick Bostrom thinks I should. Our focus on getting computers to exhibit human-level intelligence is, he thinks, misguided. We view machines that can pass the Turing Test as the ultimate destination of Doug Engelbart’s quest. But Bostrom thinks that passing the test is just a waypoint on the road to something much more worrying. “The train,” he says, “might not pause or even decelerate at Humanville Station. It is likely to swoosh right by.” He’s right: I should be careful what I wish for.