
Chapter 11
ARTIFICIAL INTELLIGENCE

Discussions about artificial intelligence (AI) come up frequently in many contexts, not least in those treated in this book. That's why I've given AI a chapter of its own.

AI is a multi-disciplinary science, encompassing electronics, computer science, psychology, sociology, philosophy, religion, medicine, and mathematics. This is by no means an exaggeration; creating AI entails knowing how "normal" intelligence works, which is easier said than done - since the only object we know with certainty to be intelligent is the human brain. AI ultimately amounts to studying the behavioral sciences in order to build models grounded in natural science. Our intelligence, it has been discovered, is strongly connected to our way of knowing the world - to our perception.

AI research is a hot topic at universities, and not without reason: for the first time in history, there is money to be made in AI. Companies that increasingly employ electronic means for communication and administration need computer programs to handle routine tasks, like sorting electronic mail or maintaining inventory. So-called intelligent agents are marketed, customized for various standardized electronic tasks. From a cynical perspective, one could say that industry can for the first time replace thinking humans with machines in areas no one had thought could be automated. (I should add that it can hardly be called automation, since the truly intelligent programs actually think, as opposed to just acting according to a list of rules.)

There are a number of approaches and orientations within AI. Among the most prominent are: expert systems (large databases containing specific knowledge), genetic algorithms (simulated evolution of, for example, mathematical formulas to suit a certain purpose), and neural networks (imitations of the organizational structure of the brain, using independent, parallel-processing nerve cells). As information databases like those on the Internet become larger and more numerous, agents can work directly with the information without having to understand people. Why assign a person to do research when you might as well let an agent do it, more quickly and for less money? (Anyone who has ever looked for information on the Internet will realize how useful a more intelligent search tool would be.)
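To make the genetic-algorithm idea a little more concrete, here is a minimal sketch of my own: a population of random bit strings is "evolved" toward a target pattern through selection and mutation. Everything in it (the target, the rates, the names) is invented purely for illustration:

    # A minimal genetic algorithm: evolve random bit strings toward
    # a target pattern through selection and mutation.
    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 1]   # the "purpose" to evolve toward

    def fitness(candidate):
        # Count how many bits already match the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Flip each bit with a small probability (the "mutation risk").
        return [1 - b if random.random() < rate else b for b in candidate]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                          # a perfect individual has evolved
        survivors = population[:10]        # selection: the fittest half lives
        population = survivors + [mutate(s) for s in survivors]

    print(generation, population[0])

Real genetic algorithms evolve program parameters or entire formulas rather than bit strings, but the loop of selection and mutation is the same.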

There is also research in the field of artificial life, which concerns "living" organisms that exist and reproduce in computer systems. Computer viruses constitute one form of artificial life, albeit a somewhat unsophisticated and destructive one. Artificial life has hitherto not achieved any substantial success. (Unless you want to view computer viruses, and all the companies and consultants that make a living fighting them, as a success - they have evidently boosted GNP.) Research in the field of artificial life began with a program called Life, by the mathematician John Conway, which was a mix between a computer game and a mathematical simulation. Bill Gosper, a hacker at MIT, became virtually obsessed with this simulation. A later game in the same spirit was Core War, the idea being that many small computer programs try to expand and fight over system memory (core memory), with the strongest ones surviving. The programs are exposed to various environmental factors similar to the demands put on real life: lonely or overcrowded individuals die, programs run the risk of mutation, system resources vary with time (daily rhythms), aging organisms die, and so on. Tom Ray has been especially successful in the field with his Tierra program. His artificial life forms have, through simulated Darwinian evolution, managed to develop programming solutions to certain specific problems that were better than anything man-made.
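Conway's rules are simple enough to sketch in a few lines. The following is my own minimal illustration, not anyone's official implementation: a cell survives with two or three neighbors, a dead cell comes alive with exactly three, and everything else dies of loneliness or overcrowding:

    # Conway's Life: survival with 2-3 neighbors, birth with exactly 3.
    from collections import Counter

    def step(live_cells):
        # live_cells is a set of (x, y) coordinates of living cells.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    # A "glider", one of the famous Life patterns:
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))   # after four generations it has crawled one step diagonally

Despite the simplicity of the rules, patterns of astonishing complexity emerge - which is exactly what made the simulation so absorbing.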

I have already mentioned that hackers have a respect for artificial intelligence that is completely different from that of people in general. A person who grows up constantly surrounded by computers sees nothing threatening in the fact that machines can think. He/she sees the denunciation of AI as a sort of racism directed towards a certain life form. If you criticize artificial intelligence, saying that it can never be the real thing, that only humans can think, etc., then consider the fact that there is no scientific basis whatsoever for supposing that the human brain is anything but a machine, although one made of flesh.

These thoughts date back to Ada Lovelace and Charles Babbage, two of the progenitors of computers, who discussed the subject of thinking machines in the 19th century. However, these ideas did not become widely known until the 1960's, through films such as the horror movie Colossus: The Forbin Project (1970), in which intelligent military computers take over the world. This notion also figures in the Terminator films, with the only significant difference being that the computer's name is Skynet - thus, not much new under the sun in popular sci-fi. The fear of artificial intelligence actually dates all the way back to Mary Shelley's Frankenstein (1818), and perhaps even further back in history.

In the fiction of Frankenstein, the fear of AI is personified. This story, about a scientist who creates a lethal intelligence, has become one of the new symbols of the industrialized world, in the same class as early Greek mythology. There is a connection between the Bible and Frankenstein, in that the creation (mankind in the Book of Genesis, the monster in Frankenstein) rebels against its creator (God and human, respectively). In Judaic mythology there is a corresponding myth about the clay-man Golem, who runs amok when its master forgets to control the creature. It has occurred to me how far ahead of its time this myth was: Golem was made of clay, and computers are made of silicon, which is made from sand. The maker of Golem, Rabbi Löw, feeds a piece of parchment with the name of God on it into the creature's mouth in order to make it "run". This is comparable to the engineer "feeding" software into the computer. To stop the runaway Golem, the Rabbi removes the parchment from its mouth, whereupon the creature collapses into a pile of dried mud, robbed of its spark of life.

Thus, the fear that mankind - like God - will create intelligent life from dead matter is found in both of the myths described above. This rather unfounded fear - that the creation will rebel against its creator, as humans once rebelled against God - makes up the foundation of much of the hostility directed towards AI research. The fear is rooted in the Biblical myth of Adam and Eve eating the forbidden fruit, and in the possibility that another creation will follow in our footsteps. I will, however, overlook these myths, and instead focus the argument on the philosophy underlying AI research: pragmatism, with its heritage of fallibilism, nihilism, and Zen philosophy. (Don't let these strange words discourage you from reading on!)

One could ask why scientists insist on trying to create artificial intelligence. After all, there are already people, so why attempt to create something new, better, something alien? Asking this question of a scientist in the field is akin to asking a young couple why they insist on having children. Why raise a new generation that will question everything you have built? The answer is that it is simply something that happens, or gets done: it is a challenge, a desire to create something that will live on, an instinct for evolution. This is perhaps also part of what motivates hackers to create computer viruses: the pleasure of seeing something grow and propagate.

Our entire society and our lives are so interlinked that they cannot be separated. Society, machines, and humanity - everything has to progress. Evolution doesn't allow any closed doors, and AI is, in my view, only another step on the path of evolution. I see this as something positive, while others are terrified. At the same time, one shouldn't forget to note the commercial interests underlying the expansion of AI. Computers that read forms and sort and distribute information are obviously just another way for the market to "rationalize" people out of the production chain, automating clerical work and making the secretary and the accountant obsolete. The board of directors of a corporation is, as usual, only interested in making money and accumulating capital. Wouldn't you be? What is the hidden nature of this complex entity (or, as I would call it, superentity) that we call "the market", and which constantly drives this process of development forward?

If you are interested in knowing more about AI and its philosophical aspects, you would do well to read The Age of Intelligent Machines, by Raymond Kurzweil (1990). To learn more about the inner workings of AI, read Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, which is both an elevating and a depressing work. In one respect, it is a scientific validation of Kafka's thesis that to correctly comprehend something and at the same time misunderstand it are not mutually exclusive - an observation that (fascinatingly enough) is akin to the paradoxes within Zen Buddhism, a religion that in some aspects borders on pure philosophy. To explain some of AI's mechanisms, I need to explain some things about the part of Zen that is associated with philosophers like Mumon, and which has less to do with sitting around in a lotus position and meditating all day. Zen, in itself, is a philosophy that can be dissociated from Buddhism and viewed separately. Buddhism is based on respect for life in all its forms; Zen, by itself, makes no such demands, being a non-normative, non-religious philosophy.

Zen, or the Art of Breaking Out of Formal Systems
Zen has also become one of the most influential "new age" philosophies in the West during the 80's and 90's. Books like Zen and the Art of Motorcycle Maintenance sell amazingly well. Among other things, Zen Buddhism suggests that the entity that Western tradition calls God (and what the Buddhists call the Brahma of the Buddha) is in fact the sum of all the independent processes in the universe, and not a sentient force. Therefore, God is equally present in the souls of humans as in the circuits of a computer or the cylinder shafts of a motorcycle. Put simply, Zen is one long search for the connection between natural processes, in the cosmos or the microcosmos, and this search in itself constitutes a process that interfaces with the others. Zen Buddhism is the search in itself, the point being that Zen (an abstract term for "the answer") will never be found. Searching for Zen means that one continually comes to a point where one answers a question with both yes and no. For example:

Q: Is the ball in the bottle?
A: In one way, yes, if the bottle's inside is its inside, and in one way, no, if the bottle's outside is its inside.

Zen constantly toys with our way of defining our environment, our method of labeling things as well as people. Zen teaches us to see through the inadequacies of our own language and assists us in dismantling fallacious systems, as when, for example, we've gotten the idea that all criminals are swarthy (or that all hackers break into computer systems!). Zen is the thesis that no perfect formal systems exist, that there is no perfect way of perceiving reality. Kurt Gödel, the mathematician, proved that there are no perfect formal systems within mathematics, and the fact that there are no perfect systems within religion should be apparent to anyone who isn't a fundamentalist.

Zen could be said to be based on the following supposition: the only absolute truth is that there are no absolute truths. A paradox! - which is, naturally, a perfect starting point for the thesis that reality cannot be captured and all formal systems (like human language, mathematics, etc.) must contain errors. Even the proposition that reality is incomplete is incomplete! Truth cannot be fully expressed in words - hence the necessity of art and other forms of expression. I will end the discussion of Zen here, but hopefully you understand why many become confused and annoyed when one tries to explain Zen, given that the explanation is that there is no explanation. Note, for example, William S. Burroughs' line "language is a virus from outer space", expressing his frustration with the limitations of human language. Even Nietzsche criticized language, finding it hopelessly limited, and the feminist Dorothy Smith has a theory concerning the use of language to control the distribution of power in society.(1)

In the Western philosophical tradition, the equivalent of Zen is called fallibilism, a philosophy based on the theory that all knowledge is preliminary. This has subsequently been developed into a philosophical theory called pragmatism, which views all formal systems as fallible, and thus judges them by function rather than construction. Gödel's incompleteness theorem is probably the most tangible indication that this conception of the world is correct.(2)

A lot of modern mathematical theory about so-called non-formal systems is associated with both Zen and chaos theory. A non-formal system creates formal systems to solve problems. In order to have a chance of understanding a (superficially) chaotic reality, we must first simplify it by creating formal systems on different levels of description, but also retain the capacity to break down these systems and create new ones. For example, we know that humans are made up of cells. We also know that we are made up of atoms, and as such, of pure energy. Nature offers so many levels of description that we have to sift through them to find those we need to complete the tasks we have selected. This is called intelligence.

There are also other philosophies that draw on parts of Zen: for example, Taoism views contradictory pairs such as right/wrong or one/zero (the smallest building blocks of information) as holy entities, and focuses on finding the "golden mean" between them (the archetype is Yin and Yang, a kind of original contradictory pair). Our Western concept of thesis-antithesis-synthesis also belongs to this group. The strength - and weakness - of these approaches is that they instill in their followers a belief that moderation is always best, which can be both true and false according to Zen (depending on how you view it). All such attempts to force reality into formal systems are of course interesting, but definitely temporary and constantly subject to adaptation. Another philosophical system using this mode of thought was the pre-Christian Gnosticism, where the original opposites are God and Matter. These become intertwined within a sequence of Aeons (ages of time, imaginary worlds, or divine beings). Gnosticism probably originates (in turn) from an old Persian religion called Parsism, created by the well-known philosopher Zarathustra, who originally claimed that the world was based on such opposites.

Zen's way of thinking is partially a confirmation of the so-called nihilistic view of reality, in which objective truth does not exist, and partially a denial of it: it is simply a matter of point of view. Objective truth exists inside formal systems, whereas outside them, it does not. By breaking out of a formal system in which reality is described in terms of right and wrong, or intermediate terms such as more right than wrong, one finds a part of the core of intelligence. Being intelligent means being able to build an ordered system out of chaos - and to know it thoroughly enough to be able to view one's own system from the inside and adjust one's own thoughts according to its rules. AI research has - in an amazing fashion - shown that this ability is completely vital to any intelligent operation whatsoever.

The difference between the real world and the one pictured inside a formal system of one's own creation has ruffled the feathers of such grandfathers of philosophy as Plato, Kant, and Schopenhauer. It has made them decide, after laborious analysis, that the real world is defective and incapable of approaching their own perfect, mathematical world of ideas. (Please note my mild insolence; as a 24-year-old layman I shouldn't be able to claim the right to even speak of these great philosophers. The alert reader will notice that I'm very busy questioning traditional authorities ;-). In science, this conflict is known as the subject-object controversy. Even in such "hard" sciences as physics this conflict has proved to be decisive, especially in Bell's Theorem (well-known among physicists), which has puzzled many a scientist. (I'm not going to go into the details of Bell's Theorem; I'm merely employing it as a reference for those who are familiar with it.)

When AI researchers sought the answer to the mystery of intelligence, they came into conflict with scientific paradigms. We need to use intelligence to understand intelligence. We need a blueprint for making blueprints, a theory of theoretical methods, a paradigm for building paradigms, and so on. They found a paradox in which one formal system would have to be described in terms of another formal system. This is when they took Gödel's theorem to heart - a proof that every sufficiently powerful formal system contains statements it can neither prove nor disprove. The solution to the problem of creating a formal system for intelligence was self-reference, just as a neuron in the brain will change its way of processing information by - just that - processing information. The answer to intelligence wasn't tables, strict sets of rules, or mathematics. Intelligence wasn't mechanical. For intelligence to flourish, it would have to be partially unpredictable, contradictory, and flexible.
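Self-reference is nothing mystical in programming terms; a formal system can refer to itself perfectly well. The classic demonstration is a quine, a program whose output is its own source code (the two lines below are a common Python idiom, included here only as an illustration):

    # The two lines below print themselves exactly - a formal
    # system describing itself.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Gödel's proof rests on exactly this kind of trick: a statement that encodes, and therefore can talk about, itself.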

Many hackers and net-users are devoted Zen-philosophers, not least because many of the functions within computers and networks are fairly contradictory. The section of computer science concerned with AI is self-contradictory to the highest degree. Programming is also the art of creating order from an initially chaotic system of possible instructions, culminating in the finished product of a computer program. If this section has been hard to understand, please read it again; it is worth comprehending.

Humans as Machines - The Computer as a Divine Creation
Most hackers view people as advanced machinery, and there's really nothing wrong with this; it is simply a new way of looking at things, another point of view within the multi-faceted science of psychology. Hackers in general are futurists, and to them the machine (and thus the human) is something beautiful and vigorous. I'll willingly admit that to a certain extent I also view humans as machines, but I'd like to tone that statement down a bit by saying that we (like computers) are information processors - we are born with certain information coded in our genes, and in growing up we assimilate more and more information from our environment. The result is a complex mass of information that we refer to as an individual. The process by which information is handled and stored in the individual is known as intelligence. The individual also interacts with the environment by symbolically absorbing and emitting pieces of information, and thereby becomes part of an even larger process, which is in itself intelligent. (If you're of a religious persuasion, this could be taken as an example of hubris.) But what about the difference between computers and humans?

Two things: the computer knows who created it, and human life is clearly time-limited. It has been proposed that the uniqueness of a human "soul" is a product of just these two factors, and that it is therefore only uncertainty and finitude that make life "worth living". Of course, the theory could be challenged by proposing that its two premises are negotiable from a long-term perspective. Here the reader will have to draw his or her own metaphysical conclusions; the subject is virtually interminable, and the audience inexhaustible.

"I have seen things you humans can only dream of… Burning attack cruisers off the shoulder of Orion… I saw the C-rays glitter in the Tannhauser Gate… All these moments will now be lost in time, like tears in the rain."

(The android Roy Batty in Ridley Scott's Blade Runner, understanding some of the meaning of life in his final moments)

If we delve deeper into psychology, the subject becomes simpler. An intelligent system, whether artificial or natural, must be checked against a surrounding system (what we might term a meta-system) in order to know the direction in which to develop itself. In an AI system designed to recognize characters, "rewards" and "punishments" are employed until the system learns to correctly distinguish valid symbols from invalid ones. This requires two abilities within the system: the ability to exchange information, and the ability to reflect on this exchange. In an AI system, this is a controlled, two-step sequence: first information is processed, then the process is reflected upon. In a person, the information processing (usually) takes place during the day, and the match against the "correct" pattern occurs at night, in the form of dreams in which the events are recollected and compared to our real motives (the subconscious). The similarity is striking.
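The reward-and-punishment scheme is easy to sketch. The following miniature "character recognizer" is my own illustration (the tiny 2x2 images and all names are invented): a so-called perceptron nudges its weights a little every time it misjudges a symbol, which is the punishment mechanism in its simplest form:

    # Reward/punishment learning in miniature: a perceptron adjusts
    # its weights whenever it misclassifies a "character".
    def predict(weights, pixels):
        return 1 if sum(w * p for w, p in zip(weights, pixels)) > 0 else 0

    # Each sample: four pixels of a 2x2 image plus a constant bias input,
    # and whether the image counts as a valid symbol (1) or not (0).
    samples = [
        ([1, 1, 1, 1, 1], 1),   # a filled square is "valid"
        ([1, 0, 0, 1, 1], 1),   # a diagonal is "valid"
        ([0, 0, 0, 0, 1], 0),   # a blank image is "invalid"
        ([0, 1, 0, 0, 1], 0),   # a stray dot is "invalid"
    ]

    weights = [0.0] * 5
    for epoch in range(20):
        for pixels, valid in samples:
            error = valid - predict(weights, pixels)   # the "punishment" signal
            if error:
                weights = [w + error * p for w, p in zip(weights, pixels)]

    print([predict(weights, p) for p, _ in samples])   # -> [1, 1, 0, 0]

Real neural networks stack thousands of such units, but the principle - process information, compare against the correct pattern, adjust - is the same two-step sequence described above.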

Through this line of reasoning, we can draw the conclusion that people have an internal system for judging correct action against incorrect action. As if this weren't enough, we also know that we can alter the plans by which we act - i.e., we are not forced to follow a specific path. In this sense, humans are just as paradoxical as any non-formal system, since we have the ability to break out of the system and re-evaluate our objectives. However, the great philosophers of psychology, Sigmund Freud and Carl Jung, found that there was a set of symbols and motives that were not subject to modification, but rather common to all persons. Freud spoke of the overriding drives, mainly the sexual and survival drives. Jung expanded the argument to encompass several archetypes, which refer to certain fundamental notions of what is good and what is evil.(3) These archetypal drives, which seem to exist in all animals, appear to be the engine that propels humans into the effort of exploring and trying to understand their environment.

This is the most fundamental difference between persons and machines. There is nothing that says we have to let intelligent machines be driven by the same urges as we are. Instead, we can equip them with a drive to solve the problems for which they were constructed. When the machine evaluates its own actions, it is then constantly driven towards doing our bidding. Isaac Asimov, the science-fiction writer, suggested as much in his robot novels through the concept of the laws of robotics, by which robots were driven by an almost pathological desire to please their human masters. This relationship is also found in the modern film Robocop, in which a cyborg policeman is driven by his will to indiscriminately uphold the law.

Towards an Artificial Age - AI and Society
Aspects of AI are mirrored in the media of our time - Blade Runner is about the difference between man and machine, AI figures heavily in cyberpunk novels, music, and film, and in the mid-1990s Frankenstein made a comeback in the theaters. Coincidence? Hardly. An exciting example of this trend is Arnold Schwarzenegger's role as the robot in Terminator 2. In the film, the artificial intelligence holds human characteristics as a result of being programmed by a human rebel instead of a brutal military force. It also touches upon the consequences of carelessly handling technology (as when Rabbi Löw lost control of his Golem). Of particular interest is the scene in which the robot, being a machine, simply follows its programmed instructions to obliterate people standing in its way instead of finding peaceful solutions. The lead character, John (who incidentally happens to be a skilled hacker), discovers a dangerous "programming bug" in the robot's instruction set, which he corrects. The message of the film is that technology and AI are good things - if used properly and supervised by human agents. The real danger is people's ignorant nonchalance.

The Swedish movie Femte Generationen ("The Fifth Generation") deserves mention again in this context. Fifth-generation computer systems are simply another name for artificially intelligent systems.

Lars Gustafsson makes a strong impression with his beautiful sci-fi novel Det Sällsamma Djuret Från Norr ("The Strange Beast from the North"), which treats the metaphysical aspects of AI in a thorough and entertaining manner. Especially exciting are his thoughts on decentralized intelligence, which suggest that a society of ants could be considered intelligent even though a single ant could not - and that, in the same manner, all of humanity could be viewed as one cohesive, intelligent organism. This view is taken from sociology, which has become very important to AI research.

Flows of information are an indication of intelligence. This is confirmed by the model of society as a unitary sentient force. The intelligence of individuals and the intelligence of societies are undoubtedly related; the ability to store and process information through the construction and dissolution of formal systems is a sign of intelligence. Society is an organism, but at the same time it is not (yes, this is very Zen). These ideas go all the way back to the founder of sociology, Auguste Comte. I have myself coined the term superindividuals as a label for these macro-intelligences known as corporations, the market, the state, capital, and so on. I will return to this subject later on.

Again, it is possible to emphasize the relatedness of chaos research and intelligence; intelligence can be seen on many different levels, each constituting a formal system in itself. One system is akin to another, and together they form a strangely coherent pattern. Our intelligence seems to be united with our ability to subdue chaos.

Alan Turing and the Turing Test
Alan Turing was one of the very first people concerned with making machines intelligent. He proposed a test that could decide whether or not a system was intelligent - the so-called Turing Test. It consisted of placing a person in a room with a terminal that was connected either to a terminal controlled by another person, or to a computer pretending to be a person. If the test subjects couldn't tell the difference between man and machine - i.e., if they couldn't make a correct judgment in more than half of the cases - the computer could be said to be intelligent.

This test was rather quickly subjected to criticism by way of a thought experiment called the Chinese Room. It entailed running the Turing Test in Chinese, with a Chinese-speaking person at one terminal and a person who didn't speak Chinese at the other. For the non-Chinese-speaker to have a chance to answer the questions posed by the Chinese speaker, he or she was presented with a set of rules covering symbols, grammar, and so on, through which sensible answers could be formulated without the subject knowing a word of Chinese. By simply performing lookups in tables and books, it would seem as if the person in fact spoke Chinese and was intelligent, although he or she was just following a set of rules. The little slave running back and forth, answering the Chinese speaker's questions without understanding anything, was compared to the hardware of the computer, the machine. The books and the rules for responding constituted the software, the computer program. In this way, it was argued that the computer couldn't be intelligent, but rather only capable of following given instructions.
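The rule-following at the heart of the argument can be caricatured in a few lines of code. The dialogue below is a deliberately silly invention of mine (and in English rather than Chinese), but it shows the principle: the "person" produces sensible-looking replies by pure lookup, without understanding a thing:

    # The Chinese Room as a bare lookup table: blind symbol
    # manipulation with no comprehension anywhere in sight.
    RULE_BOOK = {
        "How are you?": "Fine, thank you.",
        "Do you speak Chinese?": "Of course I do.",
    }

    def person_in_the_room(question):
        # Look the question up; no understanding is involved.
        return RULE_BOOK.get(question, "Could you repeat that?")

    print(person_in_the_room("Do you speak Chinese?"))   # looks intelligent...

The question the next paragraph answers is where, if anywhere, the intelligence in such a system resides.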

However, it turned out that this objection was false. What the Chinese speaker is communicating with is not solely the person sitting at the other end, but the entire system - including the terminal, the books, the rule sets, and so on - that the poor stressed-out fellow in the other room uses to formulate answers. Even if the person sitting at the other end of the line is not intelligent, the system as a whole is. The same goes for a computer: even if the machine or the program is not intelligent in itself, the entire system of machine + program very well could be. The case is the same for a human - a single neuron in the brain is not intelligent. Not even entire parts of the brain, or the brain itself, are intelligent in isolation, since they cannot communicate. The system of a person with both a body and a brain, however, can be intelligent!(4)

From this follows the slightly unpleasant realization that every intelligent system must constantly process information in order to stay intelligent. We have to accept sensory input and in some way respond to it to properly be called intelligent. A human without the ability to receive or express information is therefore not intelligent! A flow of information is an indication of the presence of intelligence. From this stems the concept of brain death - a human without intelligence is not a human.

We might finish this chapter by defining what intelligence really is (according to Walleij): Intelligence is the ability to create, within a seemingly chaotic flow of information, systems for the purpose of sorting and evaluating this flow, and at the same time incessantly revise and break down these systems in order to create new ones. (Note that this definition is paradoxical, since it describes the very process by which the author was able to formulate it. You can't win… :)


1. Probably a form of structuralism.

2. "Correct" is always a vague term in the field of philosophy. Don't take it too literally, and keep in mind that this is popular science...

3. Theories which are now out of favor with the established authorities. Oh well. Enimvero di nos quasi pilas homines habent. ("Truly, the gods treat us humans like balls." - Plautus)

4. Or maybe not. It is impossible for a person to become intelligent without the society that surrounds her, and therefore it is the system of human + society that is intelligent… etc., etc.

