WHO’S AFRAID OF ARTIFICIAL INTELLIGENCE?

RICHARD H. THALER

Father of behavioral economics; director, Center for Decision Research, University of Chicago Booth School of Business; author, Misbehaving: The Making of Behavioral Economics


My brief remarks on this question are framed by two one-liners that happen to have been uttered by brilliant Israelis. The first comes from my friend, colleague, and mentor Amos Tversky. When asked once what he thought about artificial intelligence, Amos quipped that he didn’t know much about it; his specialty was natural stupidity. (Before anyone gets on their high horse, Amos didn’t actually think people were stupid.)

The second joke comes from Abba Eban, who was best known in the United States when he served as Israel’s ambassador to the United Nations. Eban was once asked if he thought Israel would switch to a five-day workweek. Nominally, the Israeli workweek starts on Sunday morning and goes through midday Friday, though a considerable amount of the “work” done during those five-and-a-half days appears to take place in coffeehouses. Eban’s reply to the query was, “One step at a time. First, let’s start with four days, and go from there.”

These jokes capture much of what I think about the risks of machines taking over important societal functions and then running amok. Like Tversky, I know more about natural stupidity than about artificial intelligence, so I have no basis for forming an opinion about whether machines can think, and if so, whether such thoughts would be dangerous to humans. I leave that debate to others. Like anyone who follows financial markets, I’m aware of incidents such as the Flash Crash in 2010, when poorly designed trading algorithms caused stock prices to fall suddenly, only to recover a few minutes later. But this example is more an illustration of artificial stupidity than hyperintelligence. As long as humans continue to write programs, we’ll run the risk that some important safeguard has been omitted. So, yes, computers can screw things up, just like humans with “fat fingers” can accidentally issue an erroneous buy or sell order for gigantic amounts of money.

Nevertheless, fears about computers taking over the world are premature. More disturbing to me is the stubborn reluctance in many segments of society to allow computers to take over tasks that simple models perform demonstrably better than humans. A literature pioneered by psychologists such as the late Robyn Dawes finds that virtually any routine decision-making task—detecting fraud, assessing the severity of a tumor, hiring employees—is done better by a simple statistical model than by a leading expert in the field. Let me offer just two illustrative examples, one from human-resource management and the other from the world of sports.

First, let’s consider the embarrassing ubiquity of job interviews as an important, often the most important, determinant of who gets hired. At the University of Chicago Booth School of Business, where I teach, recruiters devote endless hours to interviewing students on campus for potential jobs—a process that selects the few who will be invited to visit the employer, where they will undergo another extensive set of interviews. Yet research shows that interviews are nearly useless in predicting whether a job prospect will perform well on the job. Compared to a statistical model based on objective measures such as grades in courses relevant to the job in question, interviews primarily add noise and introduce the potential for prejudice. (Statistical models don’t favor any particular alma mater or ethnic background and cannot detect good looks.)

These facts have been known for more than four decades, but hiring practices have barely budged. The reason is simple: Each of us just knows that if we are the one conducting an interview, we will learn a lot about the candidate. It might well be that other people are not good at this task, but I am! This illusion, in direct contradiction to empirical research, means that we continue to choose employees the same way we always did. We size them up, eye to eye.

One domain where some progress has been made in adopting a more scientific approach to job-candidate selection is sports, as documented by the Michael Lewis book and movie Moneyball. However, it would be a mistake to think there has been a revolution in how decisions are made in sports. It’s true that most professional sports teams now hire data analysts to help them evaluate potential players, improve training techniques, and devise strategies. But the final decisions about which players to draft or sign, and whom to play, are still made by coaches and general managers, who tend to put more faith in their gut than in the resident geek.

An example comes from American football. David Romer, an economics professor at Berkeley, published a paper in 2006 showing that teams choose to punt far too often, rather than trying to “go for it” and get a first down, or score. Since the publication of his paper, his analysis has been replicated and extended with much more data, and the conclusions have been confirmed. The New York Times even offers an online “bot” that calculates the optimal strategy every time a team faces a fourth-down situation.

But have coaches caught on? Not at all. Since Romer’s paper was published, the frequency of going for it on fourth down has been flat. Coaches, who are hired by owners based in part on interviews, still make decisions the way they always have.

So pardon me if I don’t lose sleep worrying about computers taking over the world. Let’s take it one step at a time, and see if people are willing to trust them to make the easy decisions at which they’re already better than humans.