ORGANIC VERSUS ARTIFACTUAL THINKING

JUNE GRUBER

Assistant professor of psychology, University of Colorado, Boulder

RAUL SAUCEDO

Assistant professor of philosophy, University of Colorado, Boulder

Organisms are machines (broadly understood, anyway). Thus, since we as humans are thinking organisms, we’re machines that think—we’re organic thinking machines, as arguably are a variety of nonhuman animals. Some machines are artifacts rather than organisms, and some of them arguably think (again, broadly understood). Such things are artifactual thinking machines; computers and the like are examples.

An important question is whether there’s a deep ontological divide between organisms and artifacts generally. But rather than addressing this directly, we’d like to ask a different, albeit related, question: Are there deep differences between the kind of thinking organisms exhibit and the kind that thinking artifacts like machines can do—between organic and artifactual thinking? This isn’t a question about the definition of words like think, thinking, and thought. There’s little depth to the question of whether, for instance, the information input, processing, and output that computers are capable of is or ought to be captured by such terms. Rather, the issue is whether what things like us do and things like computers do—call those activities or capacities or what you will—are categorically different.

Recent empirical findings in affective science, coupled with recent philosophical theorizing, suggest a deep divide indeed. Suppose you’re on a hike and encounter a mountain lion. What’s going on with you at a psychological level? If you’re like most of us, you entertain a rapid stream of thoughts: “I’m going to die,” “This is really bad luck,” “I need to stay calm,” “I should have read more on what to do in this kind of situation,” and so on. And you also have myriad feelings—surprise, fear, and so on. So you have some cognitive goings-on and some affective goings-on.

Recent work in psychology and philosophy suggests that the cognitive and the affective are deeply unified. Not only may one influence the other to a greater or lesser degree in a variety of contexts, but there is in fact a single cognitive/affective process underlying what appears to be two parallel, interacting processes that can be teased apart. Lots of the kind of “thinking” we normally do is holistic in this way; the kind of information processing we normally engage in is cognitive/affective rather than purely cognitive. To the extent that we can extract a purely cognitive process, it’s merely derived from the more basic unified process. This is not a system 1 versus system 2 distinction, where the former is largely automatic and unconscious and the latter explicit and deliberate. The suggestion is, rather, that processes at the level of both system 1 and system 2 are themselves holistic—that is, cognitive/affective.

There’s no good evidence (at this point, anyway) that artifactual thinking machines are capable of this kind of cognitive/affective information processing. There is good evidence that they may become better at what they do, but they simply don’t process information via the unified cognitive/affective processes that characterize us. The information processing they engage in resembles only part of our unified processing. This isn’t to say that things like computers can’t feel and therefore can’t think, but rather that the kind of thinking they do is categorically different from ours.