THE FUTURE POSSIBILITY-SPACE OF INTELLIGENCE

MELANIE SWAN

Philosopher; science and technology innovator, MS Futures Group; founder, DIYgenomics


Considering machines that think is a nice step forward in the AI debate, as it departs from our human-based concerns and accords machines otherness in a productive way. It causes us to consider the other entity’s frame of reference. But even more important, this questioning suggests a large future possibility-space for intelligence. There could be “classic,” unenhanced humans; enhanced humans (with nootropics, wearables, brain-computer interfaces); neocortical simulations; uploaded-mind files; corporations as digital abstractions; and many forms of generated AI—deep-learning meshes, neural networks, machine-learning clusters, blockchain-based distributed autonomous organizations, and empathic compassionate machines. We should consider the future world as one of multispecies intelligence.

What we call the human function of “thinking” could be quite different in the variety of possible future implementations of intelligence. The derivation of various species of machine intelligence will necessarily be different from that of humans. In humans, embodiment and emotion have been important elements influencing thinking. Machines won’t have the evolutionary biological legacy of being driven by resource acquisition, status garnering, mate selection, and group acceptance—at least not in the same way. Therefore, species of native machine “thinking” could be quite different. Rather than asking whether machines can think, it may be more productive to shift from the frame of “thinking” to the question “Who thinks how?”: a world of digital intelligences with different backgrounds, different modes of thinking and existence, and different value systems and cultures.

Already, not only are AI systems becoming more capable, but we’re also getting a sense of the properties and features of native machine culture and the machine economy—and of what the coexistence of human and machine systems might be like.

Some examples of these parallel systems are in law and personal identity. In law, there are technologically binding contracts and legally binding contracts. They have different enforcement paradigms: inexorably executing parameters in the case of code (“code is law”) and discretionary compliance in the case of human-partied contracts. The advantage of code contracts is that they cannot be breached; the drawback is that they execute monolithically even if conditions change.
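The contrast between inexorable code execution and discretionary human compliance can be sketched in a few lines of Python. This is a purely illustrative toy, not any real smart-contract platform; every class, name, and parameter here is hypothetical:

```python
# Illustrative sketch only: a "code contract" executes inexorably once
# its condition is met, with no discretion to waive or renegotiate.

class CodeContract:
    """Once created, pays out automatically when the deadline passes.
    There is no 'breach' path: the logic runs regardless of whether
    circumstances have changed for either party."""

    def __init__(self, payee, amount, deadline):
        self.payee = payee
        self.amount = amount
        self.deadline = deadline
        self.executed = False

    def tick(self, now):
        # Inexorable execution: the only input is the clock, not intent.
        if not self.executed and now >= self.deadline:
            self.executed = True
            return (self.payee, self.amount)  # released, irreversibly
        return None


# A human-partied contract, by contrast, has a discretionary step:
# the wronged party may choose to enforce, forgive, or renegotiate.
def human_contract_outcome(breached, party_chooses_to_enforce):
    if breached and party_chooses_to_enforce:
        return "litigate"
    return "waive or renegotiate"
```

The design point is the asymmetry: `CodeContract.tick` consults nothing but its encoded condition, whereas the human path routes through a choice that can absorb changed circumstances.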

With regard to personal identity: The technological construct of identity and its social construct are different and carry different implied social contracts. The social construct of identity includes the property of imperfect human memory, which allows for forgiving, forgetting, redemption, and reinvention. Machine memory, however, is perfect and can act as a continuous witnessing agent, never forgiving or forgetting and always able to re-present even the smallest detail at any time. Technology itself is dual-use, in that it can be deployed for good or evil. Perfect machine memory becomes tyrannizing only when re-imported into static human social systems; it needn’t be restrictive. This new “fourth-person perspective” could be a boon for human self-monitoring and mental-performance enhancement.

These examples show that machine culture, values, operation, and modes of existence are already different, and this emphasizes the need for ways to interact that facilitate and extend the existence of both parties. The potential future world of intelligence multiplicity means accommodating plurality and building trust. Blockchain technology—a decentralized, distributed, global, permanent, code-based ledger of interaction transactions and smart contracts—is one example of a trust-building system. The system can be used between human parties or interspecies parties, exactly because it’s not necessary to know, trust, or understand the other entity, just the code (the language of machines).

Over time, trust can grow through reputation. Blockchain technology could be used to enforce friendly AI and mutually beneficial interspecies interaction. Someday, important transactions (like identity authentication and resource transfer) will be conducted on smart networks that require confirmation by independent consensus mechanisms, such that only bona fide transactions by reputable entities are executed. While perhaps not a full answer to the problem of enforcing friendly AI, decentralized smart networks like blockchains are a system of checks and balances helping to provide a more robust solution to situations of future uncertainty.
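A minimal sketch of such an independent consensus check, assuming an illustrative per-entity reputation score and simple majority voting. All names, scores, and thresholds are hypothetical and stand in for no real blockchain protocol:

```python
# Hypothetical sketch: a transaction executes only if a majority of
# independent validators confirm the submitting entity is reputable.

def confirm(transaction, validators, reputation, threshold=0.5):
    """Each validator independently checks the sender's reputation;
    the transaction is confirmed only on majority approval."""
    votes = [v(transaction, reputation) for v in validators]
    return sum(votes) / len(votes) > threshold


# Validators may apply different reputation cutoffs, yet consensus
# still emerges from their independent judgments.
def strict(tx, rep):  return rep[tx["sender"]] >= 0.9
def lenient(tx, rep): return rep[tx["sender"]] >= 0.5

reputation = {"human_party": 0.95, "unknown_ai": 0.3}
validators = [strict, lenient, lenient]

tx = {"sender": "human_party", "action": "identity_authentication"}
# confirm(tx, validators, reputation) -> True: a bona fide
# transaction by a reputable entity is executed
```

The check-and-balance lies in the independence of the validators: no single party, human or machine, can unilaterally approve a transaction, which is what makes the mechanism usable between entities that need not know or trust each other.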

Trust-building models for interspecies digital intelligence interaction could include both game-theoretic checks-and-balances systems like blockchains and also, at the higher level, frameworks that put entities on the same plane of shared objectives. This is of higher order than smart contracts and treaties that attempt to enforce morality; a mind-set shift is required. The problem frame of machine and human intelligence should not be one that characterizes relations as friendly or unfriendly but, rather, one that treats all entities equally, putting them on the same ground and value system for the most important shared parameters, like growth. What’s most important about thinking for humans and machines is that thinking leads to ideation, progress, and growth.

What we want, for both humans and machines, is the ability to experience, grow, and contribute more, with the two in symbiosis and synthesis. This can be conceived as all entities existing on a spectrum of capacity for individuation (the ability to grow and realize one’s full potential). Productive interaction between intelligent species could be fostered by being aligned in the common framework of a capacity spectrum that facilitates their objective of growth, and maybe mutual growth.

What we should think about thinking machines is that we want greater interaction with them, both quantitatively, in the rational sense, and qualitatively, in the sense of extending our internal experience of ourselves and reality, as we move forward together in the vast future possibility-space of intelligence.