Department of Sociology and Balliol College, University of Oxford.
The dustjacket blurb of this distinctive and important edited collection states the questions it addresses and explains how they came to be posed in their present form:
"The past decade has witnessed an explosion of interest and new results on what were once considered timeless philosophical puzzles: Is it rational to be moral? Do independent agents need coercion to cooperate?"
"What created this surge of interest? Major books by Gauthier in 1986 and Axelrod in 1984 addressed these questions by means of models of social interaction - such as the Prisoner's Dilemma."
This blurb effectively summarises the aim of the book and explains its dichotomous nature: part philosophy, part social simulation (and occasionally both, well integrated). Most readers of this journal will certainly be familiar with Axelrod's book (1984), but perhaps less aware of Gauthier's (1986). Essays inspired by Gauthier's work address the problem of defining instrumental rationality in such a way that morality may be reducible to rationality, which may or may not appeal to those interested in social simulation.
Certain of the essays in this volume can unequivocally be said to be "must reads" for anyone interested in the simulation of interpersonal interaction, especially if one is a Prisoner's Dilemma junkie or a follower of the "Axelrod industry": the analysis of computer tournaments between strategies in Repeated Prisoner's Dilemmas. Other essays, by philosophers, concern the nature of rationality and morality; several are by some of the best known philosophers in their fields: Michael Bratman, Paul Churchland, David Gauthier, Edward McClennen, Brian Skyrms and Eliot Sober are among the more famous names. As such, the book is directed as much at high-level philosophical decision theory and moral philosophy as at those working on genetic algorithm models of learning in repeated mixed-motive games. As this is a journal about social simulation, however, I will focus on the contribution the volume makes to social simulation and to our understanding of cooperation in social dilemmas (N-person Prisoner's Dilemmas in the contributions by Glance and Huberman, and briefly by Routledge), repeated 2-person Prisoner's Dilemmas against different strategies (in essays by Burkholder, Danielson, LaCasse and Ross, Marinoff, Routledge, Sober and Talbott) and related two-person repeated games (in essays by Skyrms and by Dosi, Marengo, Bassanini and Valente). Below I will also briefly consider some of the more philosophical writings and suggest how they may be of interest in advancing the simulation of social evolution.
Is this book for you? It depends. The extent of your interest in the particular simulations and mathematical models may suffice to warrant a purchase or thorough read. Alternatively, or in addition, if you are particularly interested in philosophical discussion of the nature of rationality and its relation to morality (the "Gauthier industry") you must take the related essays here seriously.
To understand the essays devoted to social simulation in this volume, it would be helpful to be familiar with Danielson's (worryingly under-rated and little known) book Artificial Morality: Virtuous Robots for Virtual Games (1992). In that book, Danielson invents a variety of possible strategies inspired by Gauthier's initial introduction of the idea of "constrained maximisation". Constrained maximisation suggests (1) that the choices made by rational agents are over strategies, or dispositions, and not actions, and (2) that under such a model of deliberation, an agent should be a "conditional co-operator", that is, one who co-operates when that agent recognises another agent as a fellow conditional co-operator. Choice of such a disposition results in achievement of the Pareto-optimal outcome (C, C) in the Prisoner's Dilemma if one has identified a fellow "moral" agent and the (D, D) outcome if one encounters an "immoral" agent, or an unconditional defector. Danielson's innovation is to invent a number of different types of strategies with increased recognition capacity, which recognise other strategies and co-operate or defect according to their own strategy and that of their opponent.
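The logic of conditional co-operation can be sketched in a few lines of code (a minimal illustration only, assuming transparent dispositions, i.e. each agent can inspect the other's type; the payoff values follow the usual T > R > P > S ordering and, like the function names, are illustrative rather than Danielson's own):

```python
# A minimal sketch of conditional co-operation in a one-shot Prisoner's
# Dilemma, assuming transparent dispositions (each agent can inspect the
# other's type). Payoffs and names are illustrative, not from the book.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def unconditional_defector(opponent):
    return "D"                      # defects no matter whom it faces

def conditional_cooperator(opponent):
    # Co-operates only upon recognising a fellow conditional co-operator.
    return "C" if opponent is conditional_cooperator else "D"

def play(a, b):
    # Each agent chooses a move given the other's (transparent) disposition.
    return PAYOFF[(a(b), b(a))]

print(play(conditional_cooperator, conditional_cooperator))  # (3, 3)
print(play(conditional_cooperator, unconditional_defector))  # (1, 1)
```

Two conditional co-operators reach the Pareto-optimal (C, C); against an unconditional defector the conditional co-operator falls back to (D, D), exactly the pattern described above.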
Danielson's own contribution to the present volume builds on his previous work, but improves upon it in important ways. Specifically, he shows what happens when strategies are able to borrow aspects of other successful strategies (akin to genetic crossover or recombination), rather than being driven out of the population as in traditional evolutionary game theory or Prisoner's Dilemma tournaments. His essay is an important contribution to the literature on genetic algorithms in simulation, improving upon Axelrod's (1987) paper with a similar purpose. The paper by Dosi et al. also uses the genetic algorithm to model learning in repeated games. Sharing Danielson's emphasis on the relative success of different strategies in Prisoner's Dilemma tournaments, Burkholder's essay further defends Axelrod's Theorem 1: namely, that "... (t)here is no uniquely most advantageous kind of agent or player to be, no dominant agent or strategy, independent of what the other agents are like" (p. 139). Given the "tit-for-tat bubble" noted by Binmore (1994, 1998), that is, the popularly held view that Tit-For-Tat has "solved" the Prisoner's Dilemma, reminders about the impossibility of "optimal" strategies are much needed.
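The general approach can be sketched in a short genetic-algorithm loop (a minimal illustration in the spirit of Axelrod (1987), assuming memory-one strategies encoded as four-entry response tables and one-point crossover; none of the names or parameters are from the book):

```python
import random

# A minimal genetic-algorithm illustration, in the spirit of Axelrod (1987):
# each strategy is a table of responses ("C" or "D") to the previous joint
# move, and successful strategies "borrow" genetic material from one another
# via one-point crossover. All names and parameters here are illustrative.

random.seed(0)
HISTORIES = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def random_strategy():
    return [random.choice("CD") for _ in HISTORIES]

def play(s1, s2, rounds=50):
    """Score for s1 over a repeated game, assuming a co-operative opening."""
    score, h = 0, ("C", "C")
    for _ in range(rounds):
        m1 = s1[HISTORIES.index(h)]
        m2 = s2[HISTORIES.index((h[1], h[0]))]  # opponent sees mirrored history
        score += PAYOFF[(m1, m2)]
        h = (m1, m2)
    return score

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))  # splice two parent strategies
    return p1[:cut] + p2[cut:]

pop = [random_strategy() for _ in range(20)]
for _ in range(30):
    fitness = [sum(play(s, t) for t in pop) for s in pop]
    ranked = [s for _, s in sorted(zip(fitness, pop), reverse=True)]
    pop = [crossover(random.choice(ranked[:10]), random.choice(ranked[:10]))
           for _ in range(20)]  # the fitter half reproduces, with crossover
```

The key contrast with replicator-style evolution is visible in the last line: unsuccessful strategies are not simply eliminated; their building blocks can survive inside recombined offspring.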
A renowned philosopher of science, Eliot Sober, contributes an essay that addresses how play should be modelled in games, but adds a new twist that I recommend highly to the readers of this journal. Specifically, he discusses similarities and differences between rational deliberation and evolution. Whereas many followers of evolutionary theory are inclined to think of rationality as simply a kind of individual evolution, likening fitness to individual utility, Sober shows where this intuition is misguided. An interesting model of learning is also suggested in the paper by Dosi et al.
Talbott, and LaCasse and Ross, each offer interesting models characterising the kinds of deliberation over preferences which would be evolutionarily stable; that is, the kinds of preferences it would be rational to come to have, even when it may be individually irrational to possess them. (This is the classic Schelling-type "rationally motivated irrationality".) If these two essays, which apply more directly to the modelling of game play, spark your interest, then the more philosophical discussions in the three articles by MacIntosh, Schmidtz and de Sousa will help to raise further questions about just what we mean by rationality and how, or whether, it relates to morality.
The paper by Marinoff is perhaps the most sophisticated in terms of complexity. He develops twenty strategies, among them a family of "maximizing" strategies, which test other players, updating their own play based on the other players' responses to 100 randomly generated moves in a repeated Prisoner's Dilemma of 1,000 rounds. The maximizing family does well against all other strategies except itself! Since any member of the family plays randomly for the first 100 rounds, its members can neither recognise one another nor formulate a reasonable response to random play, and hence tend to get locked into mutual defection. While the paper is sophisticated and some results are interesting, it is not until the end that any possible applications to human or animal behaviour are offered, and these applications, it must be said, are quite loosely coupled. Unfortunately, this volume is not completely free of the "shoot first, ask questions later" method that sometimes plagues simulation: simulations are often technically interesting, but it is difficult to imagine them having any application to a human or animal group because the initial conditions and other parameters are so specific that one cannot see where they would arise naturally. However, this volume suffers from this fault comparatively rarely, and is to be lauded for its emphasis on simulating and modelling issues deduced from questions about individual rationality, in contrast to less individualist models, such as those based on replicator dynamics in evolutionary game theory, which concern population-based evolution (although these are certainly interesting and helpful models as well).
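The self-defeating character of probing strategies can be caricatured in a few lines (purely my own illustrative sketch, not Marinoff's code; the 0.9 threshold and the strategy names are assumptions): a strategy plays randomly for an initial probe phase, then co-operates only if the opponent looked sufficiently co-operative during the probe. Two such probers see only noise in each other and fall into mutual defection:

```python
import random

# A caricature of a probing ("maximizing") strategy: play randomly for a
# probe phase, then co-operate only if the opponent co-operated often
# enough during the probe. The 0.9 threshold and all names are assumptions,
# not Marinoff's actual model.

random.seed(1)
PROBE = 100

def prober(rnd, opponent_history):
    if rnd < PROBE:
        return random.choice("CD")           # probe with random moves
    coop_rate = opponent_history[:PROBE].count("C") / PROBE
    return "C" if coop_rate > 0.9 else "D"   # arbitrary threshold

def always_cooperate(rnd, opponent_history):
    return "C"

def match(s1, s2, rounds=1000):
    h1, h2 = [], []
    for r in range(rounds):
        m1, m2 = s1(r, h2), s2(r, h1)
        h1.append(m1)
        h2.append(m2)
    return h1, h2

# Against an unconditional co-operator the prober settles into co-operation;
# two probers see only each other's random probes and lock into defection.
h1, _ = match(prober, always_cooperate)
g1, g2 = match(prober, prober)
```

After the probe phase, `h1` is all "C" while `g1` and `g2` are all "D": each prober reads the other's random probing as uncooperative, which is the mutual-defection trap described above.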
An excellent antidote to this problem is the essay by Huberman and Glance. They borrow methods from statistical thermodynamics, in which macro-level properties are reduced to the activity of lower-level units. (An analogy would be the attempt to explain wetness or solidity as a feature of water based on what is happening at the micro-level to the atoms in water molecules.) Specifically, they discuss how different types of expectations about co-operation affect individual propensities to co-operate and how these expectations lead to group-level dynamics, much like Schelling's segregation models (1978).
The simulation models in Skyrms' chapter, which is itself reprinted from his book (1996), use Aumann's idea of a correlated equilibrium to show how non-random pairings of players can lead to co-operation (Aumann 1987). The chapter is highly recommended for both its intellectual and technical quality, and especially for highlighting the rather obvious fact of non-random pairing in the real world: a fact unfriendly to the economist's "impersonal market" view of social life.
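The point about non-random pairing can be illustrated with a back-of-the-envelope calculation (my own sketch, not Skyrms' model: `e` is an assumed probability of being paired with one's own type, and the payoffs are illustrative):

```python
# A back-of-the-envelope illustration of how non-random pairing helps
# co-operators (my own sketch, not Skyrms' model): with probability e an
# agent meets its own type, otherwise a random member of a half-C, half-D
# population. The values of e and the payoffs are assumptions.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_type, share_C, e):
    # Probability of meeting a co-operator under correlated pairing.
    p_meet_C = e * (1.0 if my_type == "C" else 0.0) + (1 - e) * share_C
    return (p_meet_C * PAYOFF[(my_type, "C")]
            + (1 - p_meet_C) * PAYOFF[(my_type, "D")])

for e in (0.0, 0.5, 0.9):
    pc = expected_payoff("C", 0.5, e)
    pd = expected_payoff("D", 0.5, e)
    print(f"correlation {e}: co-operators earn {pc:.2f}, defectors {pd:.2f}")
```

With random pairing (e = 0) defectors out-earn co-operators, but sufficient positive correlation reverses the ranking, which is how non-random pairing can sustain co-operation.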
Two other papers are excellent, but don't quite fit the simulator's usual preferences. One is a highly recommended paper by Routledge entitled "Economics of the Prisoner's Dilemma: A Background". This essay covers, concisely but thoroughly, just about all of the relevant results on the Prisoner's Dilemma from economics, as well as a partial discussion of results from Prisoner's Dilemma tournaments and N-person Prisoner's Dilemmas. Also perhaps somewhat outside the purview of standard simulation interests is an experimental study by Kollock, in which subjects play a Prisoner's Dilemma, examining the interaction effects between manipulated group identification and individual value orientations. If one is interested in whether what is simulated ever actually appears empirically, then more attention should be paid to experimental studies such as these, and the inclusion of this article in this volume is most welcome for that reason.
At the beginning of this review I stated that there were many essays concerning the definition of practical reason (also known as instrumental rationality) that were sufficiently philosophical to be uninteresting to a social simulator (namely, the essays by Bratman, Churchland, Gauthier, Irvine, McClennen and de Sousa). Where the essays by MacIntosh, Talbott and LaCasse and Ross concern whether individual rationality can explain or justify moral behaviour, these essays concern whether or not we have reason to be instrumentally rational. Such a question, whether rationality can be self-defeating, may seem uninteresting, intractable or otherwise esoteric, but I believe it is worthy of attention, not least because recent simulations have in fact attempted to model the interaction between a reasoning, planning agent and a changing environment. (See many of the papers by John Pollock on the OSCAR program at http://www.u.arizona.edu/~pollock/.) However, let me attempt briefly to pique the reader's interest in this topic.
With the exception of Churchland's essay, the essays by Bratman, Gauthier, Irvine, McClennen and de Sousa basically discuss what gives humans a reflective reason to be rational. In other words, we can ask: "does the theory of expected utility recommend the theory of expected utility as a theory of rationality one should follow?" If this is all too abstract, consider a thought experiment discussed by both Gauthier and Bratman: Kavka's (1983) "toxin puzzle". Here is how it is presented by Gauthier (p. 50) in the volume under review:
"Imagine an individual who is an exceptionally astute judge of the intentions of her fellows, and who ... is extremely wealthy. Bored with making yet more money, she devotes herself to the experimental study of intention and deliberation. She selects persons in good health ... and tells each of them that she will deposit $1,000,000 in his bank account at midnight, provided that at that time she believes that he intends, at 8 a.m. on the following morning, to drink a glass of a most unpleasant toxin whose well-known effect is to make the drinker quite violently ill for twenty-four hours ... Her reputation as a judge of intentions is such that you think it very likely that at midnight, she will believe that you intend to drink the toxin if and only if you do then so intend."
The problem here is as follows: if you want the money, you should form the intention to drink the toxin. However, you know that if you form the intention and receive the money at midnight, then come 8 a.m. the next morning you will have no reason (on expected utility terms) actually to drink the toxin, as you already have the money. So can you form the intention to drink the toxin now, on expected utility reasoning? Bratman and Kavka both think not; Gauthier thinks so. Similar cases of expected utility being self-defeating occur everywhere in individual and social rationality. This case is also much like the Prisoner's Dilemma situation, where I would do better if I could form the intention to be conditionally co-operative, although expected utility reasoning tells me I should be an unconditional defector, resulting in my having only 2 instead of 3 utils. As the toxin puzzle is a version of Newcomb's problem, and Newcomb's problem is, in the view of most, an intra-personal case of the Prisoner's Dilemma, such questions are certainly of importance for anyone vexed by the (il-)logic of the Prisoner's Dilemma. (Irvine's paper discusses Newcomb's problem, the Prisoner's Dilemma and related "paradoxes" of rationality.) In sum, for those who are interested in the Prisoner's Dilemma because it produces results which seem counter-intuitive given our definition of maximising rationality, such philosophical exploration is hearty food for thought and may lead to further understanding and modelling of conditional cooperation.
An incidental criticism is that the book has neither a subject nor a name index. There are also a few citation errors and missing pieces of text in footnotes, so perhaps a little more editing was required.
However, these are minor distractions in a volume that contributes so powerfully to combining formal modelling through pure mathematics and computer simulation with questions in moral philosophy and decision theory. Anyone concerned with the inter-relationships between these fields should read the essays in this book, and anyone concerned with either subfield (simulation or decision theory as applied to moral philosophy) should make sure that they know whether or not they can do without it.
1 Many of the essays in Bicchieri, Jeffrey and Skyrms (1997) also address learning and evolution as applied to norm emergence in ways similar to models in this volume.
2 This is similar to Macy's work. See http://www.people.cornell.edu/pages/mwm14/ for more details.
3 I recommend that anyone interested in the simulations discussed here visit the homepage of the Danielson-headed Evolving Artificial Moral Ecologies group at http://eame.ethics.ubc.ca/ (Note that the link to the simulators printed in Danielson's essay on page 439 does not take you to his simulators, which must be accessed from the homepage address above.) The EAME page contains a number of user-friendly on-line interactive simulators and related materials, including the simulators used in the essays in this volume by Danielson and Skyrms.
4 A decision theorist will be familiar with this question as being inspired by Newcomb's problem, which inspired a debate between two types of decision theory: causal (also known as expected utility) and evidential (as described by Irvine in the volume under review).
AUMANN R. 1987. Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55:1-18.
AXELROD R. 1984. The Evolution of Cooperation, Basic Books, New York, NY.
AXELROD R. 1987. The evolution of strategies in the iterated Prisoner's Dilemma. In L. Davis, editor, Genetic Algorithms and Simulated Annealing. Morgan Kaufmann, Los Altos, CA.
BICCHIERI C., R. Jeffrey, and B. Skyrms, editors, 1997. The Dynamics of Norms, Cambridge University Press, Cambridge.
BINMORE K. 1994. Playing Fair: Game Theory and the Social Contract I. MIT Press, Cambridge, MA.
BINMORE K. 1998. Review of The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration by Robert Axelrod. Journal of Artificial Societies and Social Simulation, 1(1), <http://jasss.soc.surrey.ac.uk/1/1/review1.html>.
DANIELSON P. 1992. Artificial Morality: Virtuous Robots for Virtual Games. Routledge, London.
GAUTHIER D. 1986. Morals by Agreement. Oxford University Press, Oxford.
KAVKA G. S. 1983. The toxin puzzle. Analysis, 43:33-36.
SCHELLING T. 1978. Micromotives and Macrobehavior. W. W. Norton, New York, NY.
SKYRMS B. 1996. Evolution of the Social Contract. Cambridge University Press, Cambridge.
© Copyright Journal of Artificial Societies and Social Simulation, 2000