ROBODOCTORS

GERD GIGERENZER

Psychologist; director, Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin; author, Risk Savvy: How to Make Good Decisions


It’s time for your annual checkup. Entering your doctor’s office, you shake her cold hand, the metal hand of a machine. You’re face-to-face with an RD, a certified robodoctor. Would you like that? No way, you might say. I want a real doctor, someone who listens to me, talks to me, and feels like me. A human being whom I can trust, blindly.

But think for a moment. In fee-for-service health care, a primary care physician may spend no more than five minutes with you. And during this short time, astonishingly little thinking takes place. Many doctors complain to me about their anxious, uninformed, noncompliant patients with unhealthy lifestyles who demand drugs advertised by celebrities on television and, if something goes wrong, threaten to turn into plaintiffs.

But lack of thinking doesn’t simply affect patients: Studies consistently show that most doctors don’t understand health statistics and thus cannot critically evaluate a medical article in their own field. This collective lack of thinking takes its toll. Ten million U.S. women have had unnecessary Pap smears to screen for cervical cancer—unnecessary because they’d had a full hysterectomy and thus no cervix anymore. Every year, 1 million U.S. children have unnecessary CT scans, which expose them to radiation levels that cause cancer in some of them later in life. And many doctors ask men to undergo regular PSA screening for prostate cancer, despite the fact that virtually all medical organizations recommend against it because it has no proven benefit but can cause severe harm. Scores of men end up incontinent and impotent from subsequent surgery or radiation. All this adds up to a huge waste of doctors’ time and patients’ money.

So why don’t doctors always recommend what’s best for the patient? There are three reasons. First, as noted, some 70 to 80 percent of physicians don’t understand health statistics. The cause? Medical schools across the world fail to teach statistical thinking. Second, in fee-for-service systems, doctors have conflicts of interest: They lose money if they don’t recommend tests and treatments, even if these are unnecessary or harmful. Third, more than 90 percent of U.S. doctors admit to practicing defensive medicine—that is, recommending unnecessary tests and treatments that they wouldn’t recommend to their own family members. They do this to protect themselves against you, the patient, who might pursue litigation. Thus, a doctor’s office is packed with psychology that gets in the way of good care—self-defense, innumeracy, and conflicting interests. This threefold malady is known as the SIC syndrome. It undermines patient safety.

Does it matter? Based on data from 1984 and 1992, the Institute of Medicine estimated that some 44,000 to 98,000 patients die from preventable and documented medical errors every year in U.S. hospitals. Based on more recent data, from 2008 to 2011, Patient Safety America has updated this death toll to more than 400,000 per year. Nonlethal serious harm caused by these preventable errors occurs in an estimated 4 to 8 million Americans every year. The harm caused in private practice is not known. If fewer and fewer doctors have less and less time for patients and patient safety, this epidemic of harm will continue to spread. Ebola pales compared to it.

A revolution in health care is needed. Medical schools should teach students the basics of health statistics. Legal systems should no longer punish doctors if they rely on evidence rather than on convention. We also need incentive systems that don’t force doctors to choose between making a profit and providing the best care for the patient. But this revolution hasn’t happened, and there are few signs that it will.

So why not resort to a radical solution: robodoctors who understand health statistics, have no conflicts of interest, and aren’t afraid of being sued (after all, they don’t have to repay medical school debts and have no bank accounts to protect from litigation)? Let’s go back to your annual checkup. You might ask the RD whether checkups reduce mortality from cancer, from heart disease, or from any other cause. Without hedging, the RD would inform you that a review of all existing medical studies shows that the answer is no, on all three counts. You might not want to hear that, because you’re proud of conscientiously going for routine checkups after hearing the opposite from your human doctor, who may have had no time to keep up with medical science. And your RD would not order unnecessary CTs for your child, or Pap smears if you’re a woman without a cervix, or recommend routine PSA tests without explaining the pros and cons if you’re a man. Moreover, an RD can talk to multiple patients simultaneously and thus give you as much time as you need. Waiting times will be short, and nobody will rush you out the door.

When we imagine thinking machines, we tend to think about better technology—about devices for self-monitoring blood pressure, cholesterol, or heart rate. My point is different. The RD revolution is less about better technology than about better psychology. That is, it entails thinking more about what’s best for the patient and striving for best care instead of best revenues.

OK. Your next objection is that for-profit clinics will easily undercut this vision of pro-patient robots and program RDs so that they maximize profit rather than your health. You’ve put your finger on the essence of our health-care malady. But there’s a psychological factor that will likely help. Patients often don’t ask questions in consultations with human MDs because they rely on the dictum “Trust your doctor.” But that rule doesn’t necessarily apply to machines. After shaking an RD’s icy hand, patients may well begin to think for themselves. Making people think is the best that a machine can achieve.