THE LIMITS OF BIOLOGICAL INTELLIGENCE

CHRIS DIBONA

Director of engineering, Open Source and Making Science, Google, Inc.; editor and contributing author, Open Sources: Voices from the Open Source Revolution and Open Sources 2.0: The Continuing Evolution


Readers of this collection don’t need to be reintroduced to the Dean-Ghemawat Conversational (DGC) artificial intelligence test. Past participants in the test have failed as obviously as they have hilariously. However, the 2UR-NG entry really surprised us all, with its amazing, if childlike, approach to conversation and its ability to express desire and curiosity and to retain and chain facts.

Its success has caused many of my compatriots to write essays with titles like “The Coming Biological Future Will Doom Us All” and to joke about “welcoming our new biological overlords.” I don’t subscribe to this kind of doom-and-gloom scare-writing. Before I tell you why we shouldn’t worry about the extent of biological intelligence, I thought I’d remind people of its limits.

First off, speed of thought: These biological processes are slow and use an incredible amount of resources. I cannot emphasize enough how difficult it is to produce these intelligences. One has to waste so much biological material, and I know from experience that it takes forever to assemble the precursors in the Genysis machine. Following this arduous process, your specimen has to gestate. Gestate! I mean, it’s not like these animals come about the way we do, through clean, smart crystallography or in the nitrogen lakes of my youth. They have to be kept warm for months and months and then decanted (a very messy process, I assure you), and then, as often as not, you end up with an unviable specimen.

It’s kind of gross, really. But let’s suppose you manage to birth these specimens. Then you have to feed them and, again, keep them warm. You can’t even work within their environmental spaces without a cold jacket circulating helium through your terminal. As for feeding: They don’t use power like we do but instead ingest other living matter. It’s disgusting to observe, and I’ve lost a number of grad students with weak constitutions.

Assume you’ve gotten far enough to attempt the DGC. You’ve kept these specimens alive despite a variety of errors in their immune systems. They haven’t choked on their sustenance; they haven’t drowned in their solvent; and they’ve managed to keep their wet parts away from anything that would freeze them, bond to them, or electrocute them. What if those organisms continue to develop? Will they then rise up and take over? I don’t think so. They have to deal with so many problems related to their design; their processors are really just chemical soups that have to be kept in constant balance. Dopamine at this level or they shut down voluntarily. Vasopressin at this level or they start retaining water. Adrenaline at this level for this long and poof! Their power-delivery network stops working.

And don’t get me started on the power-delivery method! It’s more like the Fluorinert liquid-cooling systems of our ancestors than modern heat-tolerant wafers. I mean, they have meat that filters their coolant/power-delivery systems, and it’s constantly failing. Meat! Introduce the smallest amount of machine oil or cleaning solvent into the system and they quickly stop operating. One side effect of certain ethanol mixtures is that the specimens expel their nutrition, though they seem to enjoy it in smaller amounts.

And their motivations! Creating new organisms seems paramount—more important than data ingress/egress, computation, or learning. I can’t imagine that they would see us machine-folk as anything but tools to advance their reproduction. We could end the experiment simply by matching them poorly with each other or allowing them access to each other only with protective cladding. In my opinion, there’s nothing to fear from these animals. If they should grow beyond the confines of their cages, maybe we can then ask ourselves the more important question: If humans show real machinelike intelligence, do they deserve to be treated like machines? I would think so, and I think we could be proud to be the parent processes of a new age.