Grenoble Ecole de Management
Part I (From Cybernetics to Machinic Philosophy) is devoted to the conceptual framework of what he calls machinic life. John Johnston first goes back to the origins of cybernetics. Von Neumann's self-reproducing automata, Ashby's homeostat and Walter's tortoises notably gave rise: 1) to a new vision of the complexity of machines (above a certain threshold of complexity, the description of the structure would be simpler than the description of the behaviour); 2) to a new interpretation of the integration of machines in their environment; and 3) to the first premises of situated robotics. But, above all, these works are at the origin of one of the most important concepts of modern science, first proposed by Ashby: self-organization, a pillar of complexity theory. In this context, cybernetics - which "offers [...] the framework on which all [possible] individual machines may be ordered, related and understood" (Ashby) - quickly tended to blur the border between machines and living organisms: it could speak of machines in biological terms and of living organisms in terms of machines.
Chapters 2 (The In-Mixing of Machines: Cybernetics and Psychoanalysis) and 3 (Machinic Philosophy: Assemblages, Information, Chaotic Flow) develop Johnston's theoretical framework.
Jacques Lacan proposed a revision of Freud based on three orders of experience: the symbolic, the imaginary and the real. According to him, consciousness is contingent on our senses and can be interpreted as a mere reflection, like a mountain on the surface of a lake. The reflection of consciousness occurs in an inner space of phenomenal images: that is what Lacan calls the imaginary order. The symbolic order begins with the circulation of speech, and Lacan, drawing on the contribution of cybernetics, considers that the symbolic realm has a dynamic of its own which has nothing to do with consciousness. Hence "the machine is the structure detached from the activity of the subject. The symbolic world is the world of the machine".
Thanks to cybernetics, Lacan managed to isolate what governs the operation and role of the symbolic order. By showing the possibility of encoding sequences of symbols articulating presence with absence, he could interpret the specific logic that regulates human existence. He showed that the symbolic order always exists in tension with the imaginary order. He realized that since many human activities are computational, the cybernetic machine does not only exist outside of us, and that the human being assumes an in-mixing of many kinds of information-processing machines. He thus accepted the challenge that cybernetics posed to human boundaries.
Though it provides a fruitful interpretation, Lacan's theory is insufficient to analyze these boundary disturbances. Johnston needs "to go beyond the concept of symbolic as an abstract machine, which comes from mathematics and formal science, in order to impute universality and deep structure to phenomena that seem neither wholly universal nor historical, neither wholly natural, nor cultural".
In order to go beyond Lacan's in-mixing, Johnston then turns to Deleuze and Guattari, who relocate subjects and machines on an expansive surface (the socius). Their theory of the assemblage links humans and machines in a concrete set-up of connections that ensures both the coding and decoding of flows of matter, energy and signs. Assemblages are not simply historical constructions. They are guided by an abstract machine (which DeLanda considers equivalent to attractors or bifurcations). This leads Deleuze and Guattari to postulate the existence of a machinic phylum which cuts across the opposition between human and nonhuman, and suggests a conjunction between the organic and the nonorganic, a form of "life" that combines properties of both. The opening of the assemblage to nonsignifying flows of matter and energy leads Deleuze and Guattari to the notion of nonorganic life. In the machinic phylum, contrary to the hylomorphic model, matter appears to be active and exhibits this hidden kind of life. According to Johnston, computers and computational methods open a new window onto the machinic phylum.
Deleuze and Guattari elaborated what Johnston calls a machinic philosophy, and he shows that articulating its central concept (the assemblage) with complexity theory provides the missing causal framework for explaining how assemblages evolve. By interpreting Deleuze and Guattari's theory in the light of the computational assemblage, Johnston proposes "to define and situate different kinds of information machines and their discourses". New forms of Artificial Life (ALife) can then be interpreted in this extended vision of assemblage theory and nonorganic life.
The second part (Machinic Life) is devoted to a broad vision of ALife. The first chapter (Vital Cells: Cellular Automata, Artificial Life, Autopoiesis) demonstrates the deep evolutions generated by ALife. According to Kant, "A machine possesses only motive force, not formative force. A product of nature is an organized being that has within it formative force, and a formative force that this being imparts to the kinds of matter that lack it (thereby organizing them). What all machines thus lack is natural purpose; machines exist only for the sake of the other". Work on cellular automata, the first steps of ALife (notably the emphasis on the importance of recursive generation), the link between life, self-organization and the "edge of chaos", and the emergence of the concept of autopoiesis all contributed to the appearance of self-organized and self-reproducing machines. In this context, ALife and autopoiesis lead to a double inversion: nonorganic machines become quasi-organic and organisms become autopoietic machines. With this new extension of the machinic phylum, the opposition between machines and organisms has become "a nexus from which new conceptual possibilities and technologies are rapidly emerging". The following chapter (Digital Evolution and the Emergence of Complexity) deals with those fascinating virtual universes populated with evolving creatures, like Tierra, Avida or Amoeba, which were so impressive that Thomas Ray, the creator of Tierra, wrote a provocative paper entitled "How I Created Life in a Virtual Universe". Johnston shows in this chapter that despite real successes, these universes proved to reach their limits quickly, notably because of their inability to embed open-ended evolution.
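The "recursive generation" at the heart of cellular automata can be illustrated with a minimal sketch: each generation of cells is produced mechanically from the previous one by a fixed local rule. Rule 110 is chosen here because it is often cited in "edge of chaos" discussions; the rule number, grid size and iteration count are illustrative choices, not taken from the book.

```python
def step(cells, rule=110):
    """Apply an elementary CA rule to one generation (wrap-around edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up rule bit
    return out

# Start from a single live cell and iterate: each generation is derived
# recursively from the last, with global patterns emerging from local rules.
cells = [0] * 31
cells[15] = 1
for _ in range(10):
    cells = step(cells)
print(sum(cells))  # number of live cells after 10 generations
```

The same scheme, extended to two dimensions and to rules that copy a structure's own description, is what underlies von Neumann's self-reproducing automata discussed in Part I.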
The next generation of virtual universes, like Taylor's Cosmos, tries to go further by unifying the content of the universe with the composition of its creatures, attempting to obtain Pattee's semantic closure, but much work remains to be done before significant results are obtained, and new directions in ALife research (like Ackley's ccr or Rasmussen's wet ALife) are to be promoted.
The third part (Machinic Intelligence) is devoted to Artificial Intelligence (AI). The first chapter (The Decoded Couple: Artificial Intelligence and Cognitive Science) sets out to show the kinship of AI and cognitive science. Johnston - through a well-built brief history and numerous examples - shows that in an evolving technical environment, AI (with the jerky relations between "classical" symbolic AI and connectionism) and cognitive science jointly evolved during the last 50 years, and that this co-evolution can be interpreted as the consequence of their belonging to a single driving force: the computational assemblage. The following chapter (The New AI: Behavior-Based Robotics: Autonomous Agents and Artificial Evolution) presents, on the basis of the cross-fertilization of cognitive science, ALife and AI, what Johnston calls "the new AI", which is structured by dynamical systems theory. This new AI gathers multi-agent systems, like ant colonies or swarms, and the new robotics symbolized by Rodney Brooks's work. The new AI is first of all based on embodiment and situatedness, thus joining Varela's cognitive-science concept of enaction. Brooks's subsumption architecture can then nicely be interpreted as a bottom-up design which gives robots their own Merkwelt. The new AI also integrates the contributions of collectivity. Since Hofstadter showed the potential importance of collective intelligence in "Gödel, Escher, Bach", many works have demonstrated that distribution and interactions are the key to emergent intelligent behaviours. This research has led to highly promising evolutionary robotics and swarm-bots, which are collections of autonomous robots able to self-assemble and self-organize.
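The bottom-up character of the subsumption architecture can be sketched as a stack of simple sensor-to-action layers in which one layer's output can suppress another's. This is a deliberately simplified arbitration scheme, not Brooks's actual robot code; the layer names and sensor fields are hypothetical.

```python
def avoid(sensors):
    """Safety layer: back away from close obstacles."""
    if sensors["obstacle_distance"] < 0.2:
        return "reverse"
    return None  # no opinion: defer to other layers

def seek_light(sensors):
    """Goal layer: head toward a light source when one is visible."""
    if sensors["light_bearing"] is not None:
        return "turn_toward_light"
    return None

def wander(sensors):
    """Base layer: default exploratory motion, always has an opinion."""
    return "forward"

# Layers ordered by priority; the first layer that produces a command
# subsumes (suppresses) everything below it. No central world model is
# consulted: each layer reads the sensors directly.
LAYERS = [avoid, seek_light, wander]

def control(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command
```

Intelligent-looking behaviour emerges from the interaction of such simple layers with the environment, which is why the approach pairs naturally with the embodiment and situatedness themes above.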
Besides nature's bottom-up method and the engineer's usual top-down method, the combination of the two, widely deployed in ALife simulations and at the heart of the new AI, "holds the greatest promise for producing machines of higher orders of complexity and consequently more advanced forms of machinic life". The last chapter (Learning from Neuroscience: New Prospects for Building Intelligent Machines) emphasizes the necessity of going further. Biologically inspired systems "have not taken off by themselves in the ways we have come to expect of biological systems", noticed Rodney Brooks. According to him, this seems to be because we might actually be missing something in our models of biology; that is what Johnston calls "the missing stuff hypothesis". In this context, Johnston notices that some new research in AI seems to respond indirectly to Brooks's doubt: "These initiatives foreground biological evolution of the human brain and the specific structure of the neocortex as the most salient facts to consider in understanding and building intelligent systems". The first initiative presented by Johnston is that of Eric Baum in "What is Thought?". Baum considers that semantic structure is inscribed in what he calls "compact code", since it embodies the constraints of the process it describes. Baum considers that the brain is made up of multiple evolved modules embodying compact code, organized hierarchically. Since they evolved, each module pursues its own objectives, namely whatever it considers most advantageous to our genes. Considering that we are very bad at writing compact code, Baum thinks that the best way is to evolve modules, that is, to write programs that can recursively write and evolve their own code. That is what he tries to do, with some preliminary success, with a system called Hayek. Johnston also considers Jeff Hawkins's work.
According to Hawkins, and very briefly put, a metric for intelligence is "how the brain remembers things and uses its memories to make predictions". For example, catching a ball is not a matter of computation, but is based on the recall of how to catch a ball, learned over years of repeated practice. Building such an intelligent system requires 1) a set of senses that extract patterns from the world, 2) a hierarchical memory system attached to those senses and 3) training the memory system so that it can begin to build a model of the world as seen through its senses. Johnston considers that this is what Steve Grand tried to do, with some success, with the project "Lucy" (an attempt to build a robot achieving "emergent mammal-style intelligence"). This work demonstrated, according to Grand, the importance of internal models and self-representation. The work of Hod Lipson on self-modelling robots also demonstrates the necessity of a self-model that is frequently updated by reviewing sensory data. In this context, consciousness, communication and language are central. Steels's work on robot communication and the evolution of language has also recently demonstrated some possibilities of evolving language.
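Hawkins's memory-prediction idea, intelligence as stored sequences reused to predict what comes next, can be illustrated with a toy sketch. The flat, single-level dictionary memory below is a deliberate simplification of the hierarchical memory he actually proposes, and the event names are invented for illustration.

```python
from collections import defaultdict, Counter

class SequenceMemory:
    """Toy memory-prediction store: no computation, only recall."""

    def __init__(self):
        # For each observed event, count what tended to follow it.
        self.transitions = defaultdict(Counter)

    def train(self, sequence):
        """Store observed transitions - the 'years of repeated practice'."""
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        """Recall the most frequently observed continuation, if any."""
        if not self.transitions[current]:
            return None
        return self.transitions[current].most_common(1)[0][0]

memory = SequenceMemory()
for _ in range(10):  # repeated practice of the same motor sequence
    memory.train(["see_ball", "raise_hand", "close_fingers"])
print(memory.predict("see_ball"))  # prints "raise_hand"
```

The point of the sketch is the inversion it makes visible: the system never computes a ball's trajectory; it only replays what memory says usually comes next.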
According to Johnston, all these new capacities are markers of a new threshold and: "With these newly complex and intelligent machines, the world becomes a different place, inhabited by new forms of complexity that are also new forms of life".
In conclusion, The Allure of Machinic Life is a must-have book for everybody interested in ALife, and it can be of interest to social simulation researchers as well. It proposes a very well-built synthesis of the field, presenting all the major works and nicely showing the unity and the diversity of the field. The theoretical framework is elegant and has the merit of proposing a unifying interpretation of AI, cognitive science and ALife within the computational assemblage. Even though the analysis of ALife realizations in the light of the machinic phylum is not always convincing, the originality and elegance of this framework mean it deserves to be known and deepened.
© Copyright Journal of Artificial Societies and Social Simulation, 2009