
Generative Social Science: Studies in Agent-Based Computational Modeling (Princeton Studies in Complexity)

Epstein, J.
Princeton University Press: Princeton, NJ, 2007
ISBN 0691125473 (pb)


Reviewed by Rosaria Conte
National Research Council, Institute of Cognitive Science and Technology, Rome, Italy


Introduction

During my daily commute, I usually feel obliged to concentrate on some paper or book grabbed from the desk just before leaving my office. Unfortunately, academic over-commitment prevents me from enjoying that invaluable pleasure which is often associated with a perfectly void state of mind. As a rule, however, such attacks of self-sacrifice are far from successful: by the time I reach home at night, little remains of this forced reading.

A couple of weeks ago, however, I was surprisingly absorbed in my reading when a sharp question brought me back to earth: "Why do you read that?". The question came from a girl of about four, pointing to the book on my knees. "Because it is pleasant reading", was my answer. The girl turned away from me, but for the rest of the ride I felt her furtive gaze, in which incredulity was mixed with disgust: despite some colourful pictures, the book was probably too wordy for her taste. As for me, I realized that the book had caught my attention to a rather unusual degree.

Epstein's Generative Social Science (that was the book) must be regarded as a success. It is a highly professional book, digestible also by non-experts without giving up scientific rigour. Probably because the author is fond of his subject matter, and manages to transfer his enthusiasm to the reader, the book can be read all at once, like a narrative. All the chapters refer to previous studies by the author and his colleagues, but the preludes added to them succeed in transfusing the author's fondness for simulation and mathematical modelling into the reports, and thence into the reader.

From the second chapter onwards, we learn about the author's intellectual journey from music to mathematics, and from both to social simulation, in contemplation of simplicity and abstraction. Even the argument for generativeness, widely treated in the first part of the volume, unfolds in a straightforward, quiet language, with a few discreet hints at the author's biography and personal opinions.

Stylized social facts abounding throughout the book make us appreciate the potential of simulation for the study of the rise and fall of the Anasazi culture, the gradual stabilizing of retirement-age norms, and the emergence of precedence rules and of inequity in resource distribution. Not to mention the studies of civil violence, the transmission of contagion, and the design of policies for containing epidemics spread by terrorist attacks. Finally, the author proceeds to the conquest of verticality: social organizations and hierarchies.

Too much? Not for this reviewer, who shares the vision that any social scientific issue may profit from the application of agent-based simulation modelling. Indeed, a far-reaching ambition adds value to the field of simulation.

In sum, there are good reasons to expect that the community of simulators will welcome this book with enthusiasm, and that other supporters will be recruited.

As this reviewer belongs to the simulators' community, the remarks and objections that follow come from within the field. Essentially, they concern the author's general points about generative explanation, emergence, and thoughtless conformity (and, with the latter, the treatment of local rules).

In what follows, I will address each of these issues in turn.

About Generative Explanation

For Epstein, generating a social phenomenon by means of agent-based simulation requires one to:

"situate an initial population of autonomous heterogeneous agents (see also Arthur) in a relevant special environment; allow them to interact according to simple local rules, and thereby generate - or 'grow' - the macroscopic regularity from the bottom up" (Epstein 1999, 41; italics are mine).

To fully appreciate the heuristic value of this new paradigm, we need to clarify what is meant by explanation. We will turn to this task below.

Generation and Causal Explanation

The idea that explaining phenomena has something to do with generating them is not new. Explanation (cf. Hall 2004[1]; but see also Gruene-Yanoff 2006) is often grounded in different types of causes, a subset of which goes back to Hume and his notion of producing causes. What is a producing cause, and how can we tell one?

"I find in the first place, that whatever objects are considered as causes or effects are contiguous; and that nothing can operate in a time or place, which is ever so little removed from those of its existence. Though distant objects may sometimes seem productive of each other, they are commonly found upon examination to be linked by a chain of causes, which are contiguous among themselves, and to the distant objects" (Hume 1739, 75).

For the purpose of the present discussion, what is interesting about this definition is the procedural nature of explanation: for the British philosopher, explaining a given event means bridging the gap from producing causes to resulting effects, unfolding the "linked chain of causes" in between. Hence, the process called for by the philosopher is a sort of reverse engineering: reconstruct the whole chain from the observed effect back to the remote causes. But how far back should one go to avoid ad hoc explanations? I will suggest that producing causes and their link to effects must be hypothesized independently of generation: rather than wondering "which are the sufficient conditions to generate a given effect?", the scientist should ask herself what a general, convincing explanation would be, and only afterwards translate it into a generative explanation.

To see why, let us turn to some classic social theories that never received a generative explanation. Occasionally, some dynamic theories have been translated into simulation models (cf. Simmel's theory of fashion in Pedone and Conte 2000). But this is not always the case. For example, the Witness Effect (WE, cf. Latané and Darley 1970), which describes large-scale, dynamic phenomena resulting from heterogeneous agents in interaction, has never been grown on a computer. Is the theory by Latané and Darley a good explanation? What would a generative variant of this theory be like? And what would be the value added of such a generative variant?

The WE occurs in social emergencies: whenever the number of bystanders reaches and exceeds the magic number of three, the probability of intervening to help the victim has been found (Latané and Darley 1970) to drop dramatically. In fact, the probability of a stalemate increases with the number of bystanders. Why?

Latané and Darley account for the WE in terms of a majority rule, on the grounds of which agents monitor one another and receive inputs as to how to interpret and react to (social) events. As three is the minimum number required for a majority to exist, it is also a critical threshold in the occurrence of the WE.

This elegant theory has received a great deal of theory-based confirmation (see the psycho-social literature on the influence of majorities), as well as evidential support from both experiments and observations. As an instructive exercise, let us ask ourselves how to translate such a theory into an agent-based simulation model. The answer seems very easy: implement the majority rule as a simple local rule, and look at its effects (a minimal sketch is given below). Now, such an answer would arouse only moderate interest even among the most fanatical supporters of simulation. Why?
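To make the exercise concrete, here is one way such a translation might look. The monitoring loop, the initial inclination to act, and every parameter below are my own illustrative assumptions for the sake of the argument, not Latané and Darley's model:

```python
import random

def run_emergency(n_bystanders, steps=20, seed=None):
    """Minimal, illustrative translation of the majority rule into a local rule.

    Each bystander starts with some private inclination to act and then
    repeatedly looks at the others: if a majority of the others is not
    intervening, the agent reads the situation as a non-emergency and stays
    passive; otherwise it intervenes. All numbers are assumptions for the
    sake of the exercise, not Latané and Darley's parameters.
    """
    rng = random.Random(seed)
    # 1 = intervening, 0 = passive.
    acting = [1 if rng.random() < 0.4 else 0 for _ in range(n_bystanders)]
    for _ in range(steps):
        new_state = []
        for i in range(n_bystanders):
            others = acting[:i] + acting[i + 1:]
            if not others:                 # a lone bystander simply acts
                new_state.append(1)
                continue
            majority_acts = sum(others) * 2 > len(others)
            new_state.append(1 if majority_acts else 0)
        acting = new_state
    return any(acting)                     # True if someone ends up helping

if __name__ == "__main__":
    for n in (1, 2, 3, 5, 10):
        helped = sum(run_emergency(n, seed=s) for s in range(1000))
        print(f"{n:2d} bystanders: victim helped in {helped / 10:.1f}% of runs")
```

Run over many random initial settings, a rule of this kind does reproduce the sharp drop in helping once bystanders reach three; the question is what such a reproduction explains.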

Indeed, whereas the theory by Latané and Darley has the heuristic and innovative power of a good scientific explanation, its above-described simulation variant is completely ad hoc. Why, one might ask, is the psychosocial theory explanatory while the simulation model is ad hoc, if in both cases the same explanandum - i.e. the WE - follows from the same explanans - i.e. the majority rule?

On second thought, the explanans is not exactly the same. The psychosocial theory does not simply state that the majority rule produces the WE, but also that agents have a majority rule somehow operating in their minds! This is not hair-splitting: while looking for a causal theory, scientists do not content themselves with just any producing factor. They look for an informative explanation, one which incorporates additional understanding of the level of reality that the phenomena of study belong to. In our example, this means an explanation adding further understanding of social individuals. That a majority rule leads to the WE under specified conditions tells us nothing new about agents' behaviours. On the contrary, that agents are governed by an internal majority rule, and consequently may interfere negatively with one another under specified conditions, is interesting news. This we learned from the work of Latané and Darley, independently of generative simulation.

Let us go back to Hume and his procedural characterization of generative explanation. On his view, the causes of any given event are valid to the extent that the whole chain leading to the event is reconstructed. This type of explanation is feasible only if a sufficiently informative cause has already been singled out, i.e. if a theory already exists! Per se, the generative explanation does not tell you where to stop in the reverse engineering that starts from the event to be explained. Which producing event is sufficiently informative to provide a causal explanation? To put it with Hartmann: "There is no understanding of a process without a detailed understanding of the ... dynamic model. Curve fitting and adding more ad hoc terms simply doesn't do the job" (Hartmann 1996; italics are mine).

Generate and Reproduce

Consider natural experiments. Behaviours observed in the laboratory are sometimes explained. But are they also generated?

In the laboratory, independent variables are manipulated to observe their effects on the target phenomenon, and by this means to reproduce it. Most certainly, Epstein and Hume would agree that in the laboratory one cannot unfold the whole chain of events from independent variable to observed effect, which is precisely what a generative explanation is supposed to do.

However, if generating means finding local rules that allow the phenomenon to occur[2], then generation is also possible in the laboratory. Certainly, experimental simulation is better than natural experiments at formulating local rules and observing their effects. But it won't tell us whether these rules are irrelevant, ad hoc, poorly informative, and so on. Like a classic black-box experiment, which tells us little about what went on between the manipulated variables and the effect observed, simulation per se tells us neither how far to proceed in the reverse engineering from the effect to interesting local rules, nor whether the algorithm reproduces the "linked chain" from causes to effects or a simple shortcut! Where, then, is the difference between the two methodologies if they both accept as an explanation the sufficient conditions for the effect to occur? Is this what is meant by growing? This time, we believe, Hume would part ways with Epstein: whereas for the Santa Fe scientist growing means finding the sufficient conditions for a given explanandum to occur, for the British philosopher remote causes are not explanations until all the intermediate factors from remote causes to effects are reconstructed. And, I would add, unless those causes are formulated prior to, and independently of, generation. The capacity to provide a sufficient explanation is common to both simulation and laboratory experiments. What should make the difference is that only in simulation can one dare to reconstruct all the intermediate factors. This should be the value added of simulation: avoiding ad hoc explanations!

From the previous discussion, we may draw two lessons: first, producing causes must be hypothesized prior to, and independently of, generation, lest the generative explanation turn out to be ad hoc; second, the value added of simulation lies not in finding sufficient conditions for an effect, which the laboratory can also provide, but in reconstructing the whole chain of intermediate factors leading to it.

Emergence

After a convincing survey of the debate around the definition of emergence in the complexity community, Epstein comes to the conclusion that this notion of emergence not only suffers from logical confusion and vague definitions, but is inherently useless.

Below, I will cast some doubts on this conclusion, and argue instead that a generative explanation of social phenomena requires an adequately defined notion of emergence.

Emergentism: Deistic Confusion or Theoretical Necessity?

With good reason, Epstein introduces his critical remarks on the notion of emergence by recalling the antiscientific definition that spread in the complexity community at the beginning of the last century. According to Alexander's assertion (1920, cited by Epstein, p. 32), emergent qualities admit "no explanation".

In this definition, Hempel and Oppenheim (1948, cited in Epstein, p. 32) found an "attitude of resignation" on the part of the complexity community. Moreover, they perceived a logical confusion in the assertion that "emergent (macro) properties cannot be deduced from lower (micro) ones" (Broad 1925), since it is propositions, as they say, not properties, that can be deduced. But propositions are relative to theories, and falsifiable theories must specify the conditions under which they hold. Hence, if we reformulate the abovementioned assertion correctly, the only conclusion concerning propositions about properties at the macro and micro levels that emergentists are entitled to draw is a relativistic one: under current theories, we are unable to deduce propositions concerning macro-social properties from propositions about micro-social properties. By this means, we have turned the deistic notion of the "unexplainable" into the much more acceptable one of the "still unexplained". As Epstein argues, this reveals the intrinsic uselessness of the notion of emergence.

Despite the logical confusion pointed out by the epistemologists, I believe there is a saltus between properties at different levels of social reality, and we need theories accounting for it. More specifically, we must account for the intuition that, although macro-social phenomena are not incorporated into lower-level entities - i.e. they are not represented in nor aimed at by them - they are necessarily implemented on them - that is, they take effect on the environment only through the action of local systems and their rules. The difference between these two types of interconnection among levels of social reality is not always perceived by social scientists, despite the eternal debate between individualists and holists.

The distinction between incorporation and implementation of higher onto lower levels might do justice to emergentism: macro-social forces are implemented on local rules (as some would put it, macro-social entities are granted no ontological autonomy), and to do so they must act upon local systems, modifying their rules. However, this process and the macro-social properties are not necessarily incorporated into local systems: they need not be mentioned within the rules on which they are implemented.

To go back to the witness effect, observing-others-before-acting is incorporated into local rules; the stalemate, which is brought about by such a rule when bystanders exceed three, is not. The stalemate is what one might want to call emergent.

Undoubtedly, once we have observed an emergent effect and constructed some hypothesis to generate it, the effect can be explained, and even deduced, if we accept Epstein's idea that generating something implies deducing it. The question is how: should the effect be incorporated into the local rule? Not necessarily. For example, a majority rule generates a stalemate under interesting structural social conditions without incorporating it.

Unfortunately, as formulated by Epstein in this book, a generative explanation is indifferent to this point: provided one starts from plausible rules, one may generate a given effect by an entirely ad hoc rule, i.e. simply by incorporating it. But what is a plausible and general set of rules by means of which this and other (macro-)social phenomena can be generated? Don't we need a theory of how to establish (and generate) local rules?

In short, a theory of emergence has a twofold advantage. Based on a clear-cut distinction between incorporation and implementation, it can help preserve the generality and plausibility of local rules: there is no need for ad hoc local rules if macro-social properties are allowed to emerge from them in non-trivial ways (which include feedback loops). By no means does this imply that scientific deduction is inherently impossible: all higher levels must be deducible from propositions concerning properties of the lower level of reality.

Furthermore, such a theory might contribute to a bidirectional view of the micro-macro link, including both bottom-up and top-down processes. Top-down processes include not only second-order emergence (Dennett 1995; Gilbert 2002), but also immergent effects (Castelfranchi 1998a; 1998b). Second-order emergence consists of emergent effects retroacting on the lower level and getting partially incorporated into local representations. This is what Gilbert refers to when he models the second-order emergence of Schelling's segregation effect, in which agents become aware of this effect.

However, second-order emergence per se has no further effect. The loop becomes more interesting when the emergent effect retroacts on the lower level by causing new actions of the producing systems. This leads the emergent effect to immerge again in the agents' minds (Castelfranchi 1998a; 1998b; Conte et al. 2007) and to give rise to new actions (for example, consider the mental ingredients responsible for norm conformity).
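A toy illustration of this kind of loop can be built on a Schelling-style grid: a global segregation index is computed at the macro level and then fed back into the agents' decision rule, so that the emergent effect alters local behaviour. The feedback function and all parameters below are my own illustrative assumptions, not Gilbert's or Castelfranchi's actual models:

```python
import random

# Illustrative sizes and thresholds, not parameters of any published model.
SIZE = 20
EMPTY = None
GROUPS = ("A", "B")
BASE_TOLERANCE = 0.375   # minimum fraction of like neighbours an agent wants
STEPS = 50

def make_grid():
    cells = [random.choice(GROUPS + (EMPTY,)) for _ in range(SIZE * SIZE)]
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbours(grid, x, y):
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
            if grid[nx][ny] is not EMPTY:
                out.append(grid[nx][ny])
    return out

def local_similarity(grid, x, y):
    ns = neighbours(grid, x, y)
    return sum(n == grid[x][y] for n in ns) / len(ns) if ns else 1.0

def global_segregation(grid):
    # Emergent macro property: mean similarity over all occupied cells.
    scores = [local_similarity(grid, x, y)
              for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not EMPTY]
    return sum(scores) / len(scores)

def step(grid, awareness):
    # Agents perceive the global index (second-order emergence) and, if it is
    # high, relax their own tolerance: the macro effect 'immerges' into the rule.
    seg = global_segregation(grid)
    tolerance = BASE_TOLERANCE * (1 - awareness * seg)
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is EMPTY]
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is EMPTY:
                continue
            if local_similarity(grid, x, y) < tolerance and empties:
                ex, ey = empties.pop(random.randrange(len(empties)))
                grid[ex][ey], grid[x][y] = grid[x][y], EMPTY
                empties.append((x, y))
    return seg

if __name__ == "__main__":
    random.seed(1)
    for awareness in (0.0, 1.0):   # 0 = classic Schelling, 1 = 'aware' agents
        grid = make_grid()
        for _ in range(STEPS):
            seg = step(grid, awareness)
        print(f"awareness={awareness}: segregation index {seg:.2f}")
```

With awareness switched off, the model is just a classic Schelling dynamic; with awareness switched on, the emergent macro property modifies the very rules that produce it, which is the minimal sense of immergence at stake here.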

Unlike self-reinforcement, where the replication of a given behaviour increases with its past occurrence, in downward causation macro-social forces (e.g., institutions, authorities) retroact on local systems by getting them to act. This does not mean that local rules magically incorporate the effect (e.g. a stochastic distribution of agents' internal disposition to norm conformity), unless we take them as unexplainable givens. To understand this process we must assume that macro-social properties generate new properties at the local level.

Thoughtless Conformity

One example of the ad hoc incorporation of macroscopic effects into local rules (for other examples, see Conte 2007) is the way in which rationality theorists, from Lewis to Young, conceive of social norms. As is the case with the rest of the book, Epstein provides a clear variant of this theory, and it is to this variant that I will refer below.

In the classic rationality framework, social norms are characterized as self-reinforcing behavioural regularities. Epstein contributes to this view with his notion of thoughtless conformity. In the next section, I will come back to the thoughtless feature. Here, instead, I would like to draw attention to conformity.

Chapter 10 begins with examples of conventions that we happen to conform to daily. In most of these cases, conformity is not decision-based: when we get up in the morning, we never consider the possibility of going out naked. Analogously, when seated in our cars, we spend no time wondering which side we should drive on, whether left or right. People, concludes the author, blindly conform to the norm: the more they have done so in the past, the more they will do so in the future. More precisely, according to Epstein, agents learn not only which norms to conform to, but also how much they should think about them.
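A rough sketch may help fix ideas. What follows is my own drastic simplification of the "learning how much to think" intuition, not a reproduction of the model in chapter 10: agents on a ring conform to the local majority and shrink or widen their sampling radius depending on whether looking further would change anything.

```python
import random

# Illustrative sizes and update rules; all numbers are assumptions.
N = 100        # agents on a ring
MAX_R = 10     # maximum sampling radius
STEPS = 5000

def sample_majority(norms, i, r):
    """Majority convention among the 2*r ring neighbours of agent i."""
    votes = [norms[(i + d) % N] for d in range(-r, r + 1) if d != 0]
    return int(sum(votes) * 2 >= len(votes))

def run(seed=0):
    rng = random.Random(seed)
    norms = [rng.randint(0, 1) for _ in range(N)]       # two equivalent conventions
    radius = [rng.randint(1, MAX_R) for _ in range(N)]  # how much each agent 'thinks'
    for _ in range(STEPS):
        i = rng.randrange(N)
        r = radius[i]
        here = sample_majority(norms, i, r)
        wider = sample_majority(norms, i, min(r + 1, MAX_R))
        norms[i] = here                    # conform to the locally sampled majority
        if wider == here and r > 1:        # looking further changes nothing:
            radius[i] = r - 1              #   think less next time
        elif wider != here and r < MAX_R:  # it would change the answer:
            radius[i] = r + 1              #   think more next time
    return norms, radius

if __name__ == "__main__":
    norms, radius = run()
    print("share following convention 1:", sum(norms) / N)
    print("average sampling radius     :", sum(radius) / N)
```

Run to convergence, the population settles on local conventions and the average radius collapses: agents end up conforming while "thinking" as little as possible.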

I think there is some truth in what the author says. As we shall see in the next section, under given conditions behaviours are routinized, and probably certain norms become behavioural routines.

My problem concerns what is meant by thought. As Epstein's argument goes, once selection has been accomplished, the self-reinforcement process starts and agents gradually learn to think about the norm less and less. Here, I take "think" to signify "select". Where does this meaning come from?

Let us go back to the theory of norms from which Epstein draws inspiration. As he acknowledges, this goes back to Lewis's theory of conventions, which are defined as arbitrary solutions to problems of coordination presenting multiple equivalent equilibria (Lewis 1969). In the context of coordination games, the emergence of a convention is reduced to a problem of selection among alternative equilibria, since all of them are by definition equivalent.

Within the strict boundaries of arbitrary conventions, this definition may be found adequate (although it may be objected[3] that it gives no account of the mandatory character of conventions). But can such a view be extended to norms at large? Are we certain that the main theoretical problem about norms and normative behaviours is which norms agents pick out? For a generative scientist, this appears a rather strange and late conversion to a static, equilibrium-oriented view of social phenomena. What about the out-of-equilibrium part? What about why and how agents take norms into consideration at all before conforming to them? Is norm conformity a disposition, a motivation that we should take for granted[4]? Is it a hardwired feature, incorporated in the so-called local rules?

Emphasis on selection is a logical consequence of the reference theory: reduced to conventions, norms are but arbitrary solutions to problems of coordination, i.e. one of multiple equivalent alternatives. The game-theoretic framework predetermines the research question. The issue to explain is which solution to coordination problems is selected: that is what agents initially think about. Whether to conform at all, on the contrary, is taken for granted: once any solution has been singled out, conformity comes as an obvious consequence. It is incorporated into the local systems. And once a solution has been picked out, there is no further need for thinking: thanks to self-reinforcement, local systems will cease to select and to think. They will have learned to conform thoughtlessly.

Even within the boundaries of such a strange conceptualization of thought, an urgent question arises here: are norms always solutions to coordination problems? What about Pareto-suboptimal norms and norms that do not imply multiple equivalent equilibria?

Suppose for a moment that we cease to define norms only in terms of coordination problems, and start to conceive of them as prescriptions to be complied with. In such a perspective, which is neither new nor original, the main question to be asked will no longer be which solution to select, but whether and why autonomous agents conform at all. Rather than incorporating conformity as such into the local rules, we will have to generate it, as we try to do with norms.

Local Rules

In the conclusions, Epstein apologizes for a number of omissions, including the learning and updating of local rules. Hence, although he would hardly subscribe to a prescriptive view of norms (like almost any social scientist committed to rational action theory), I bet he would agree with the last statement.

In my view, the problem with local rules is not only how they change, but also what they are. Why not speak about agents rather than "local rules"? What about representations, attitudes, strategies, actions, motivations, and the like? Certainly, at the moment, not many simulation models are based upon a complex architecture of the agent. But here is where the complexity paradigm proves arbitrary: why must every level of reality be defined as complex, except the mental? Why be ambitious when modelling the social, and sober to the verge of dullness when designing the mental? If we want, say, to distinguish norms from conventions, we must be audacious in designing the agent base.

An equivalent effort should be invested in differentiating between social influence and learning. The latter is usually meant to provide a better adaptation of the system to its environment and a more accurate state of its knowledge base. But this is not necessarily the case with social influence: as cultural evolution shows (see Henrich and Boyd 2001), success is neither the only nor the most important factor in the spreading of artefacts, and occasionally factors such as social prestige predict cultural transmission better than material success. How do we build up social influence?
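The contrast can be made concrete with a toy transmission model: naive agents copy a cultural trait from models chosen either by material payoff or by social prestige. This is only an illustrative sketch inspired by the biased-transmission literature, not Henrich and Boyd's actual model; the population and weights are made up.

```python
import random

def transmit(population, w_prestige, w_payoff):
    """One round of cultural transmission: each naive agent copies the trait of
    a model chosen with probability proportional to a mix of prestige and
    material payoff. Purely illustrative; not Henrich and Boyd's model."""
    weights = [w_prestige * m["prestige"] + w_payoff * m["payoff"]
               for m in population]
    return [random.choices(population, weights=weights)[0]["trait"]
            for _ in range(len(population))]

if __name__ == "__main__":
    random.seed(2)
    # Hypothetical population: the high-prestige models carry the less useful trait.
    models = ([{"trait": "useful", "payoff": 1.0, "prestige": 0.2}] * 50 +
              [{"trait": "prestigious", "payoff": 0.2, "prestige": 1.0}] * 50)
    by_prestige = transmit(models, w_prestige=1.0, w_payoff=0.0)
    by_payoff = transmit(models, w_prestige=0.0, w_payoff=1.0)
    print("prestige-biased copying:", by_prestige.count("prestigious") / 100)
    print("payoff-biased copying  :", by_payoff.count("prestigious") / 100)
```

Under prestige-biased copying, the less useful trait can spread simply because its carriers are prestigious, which is exactly the sense in which social influence cannot be reduced to adaptive learning.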

Autonomous intelligent agents are likely not only to accept social influence - yielding to the pressure to conform to conventions, abiding by norms, obeying authorities, forming organizations and institutions - but also to exercise it. How and why do they do so? How do we provide them with a capacity for filtering social influence, solving conflicts among external requests and commands, rejecting some of them, deciding to violate norms under specified conditions, adapting them to their own and to global needs under others, innovating, and so on? If we address these challenging questions with the tools of simulation, we must be more demanding as to the agent model, and not content ourselves with a bunch of simple, at most plausible, local rules.

Finally, why speak about thoughtless conformity before specifying what thought is like? No one would deny that behavioural patterns regularly associated with specified inputs often give rise to (semi-)automatic responses and are implemented as reactive routines. But is this what Epstein has in mind when speaking of thoughtless behaviour[5]? Which types of norms can be reduced to routines and applied thoughtlessly? Perhaps this can be done with conventions, especially since they are learned as "natural" rather than arbitrary: when learning their mother tongue, people do not realize its conventional source. One might say that only the convention itself is incorporated into the agent, not its source and formation.

With other types of norms, things are not so simple, especially when norms prescribe or forbid not only specific actions (such as paying taxes), but also states of the mind, like goals (we must want what is good ...), emotions ("Thou Shalt Not Desire Thy Neighbour's Wife"), feelings ("Thou Shalt Love Thy Neighbour As Thyself"), and even beliefs (dogmas). Furthermore, a norm may prescribe a very general type of action (e.g., reciprocity) for which no specific routine is available. Finally, norms may prescribe states of the world to be maintained or achieved (for instance, "keep your city clean"). In all of these cases, the addressee is supposed to act in observance of the norm without being given any routine. She is expected to put her intelligence at the service of the norm, solving the problems and removing the obstacles to its satisfaction. In such cases, there seems to be no point in reducing norms to thoughtless routines.

On the contrary, there may be some advantage in "forgetting" the source of the norm. This happens when norms are internalized, i.e. transformed into internal, endogenous drives. An interesting theoretical question concerns the advantages of the different types and mechanisms of incorporating norms and conventions: what is each of them good for? What are their respective effects? For example, can we envisage any advantage in norm violation, or is it always a social evil (on this point, see Conte et al. 1998; Castelfranchi et al. 1999)?

Probably, among other mechanisms, we should also consider semi-autonomous behaviour, in which agents conform automatically under default conditions but are able to perceive and rapidly adjust to changing conditions, and, if need be, violate the norm. In most settings, these mechanisms get mixed in real behaviour.

Whatever it means, a theory of norms as thoughtless conformity does not set us free from the task of working out a theory of thoughtful behaviour.

All things considered, Epstein's book is a successful endeavour to institutionalize agent-based social simulation as a fruitful, ambitious, and important scientific enterprise.

As a member of the league of social simulators, I am convinced by the main points Epstein formulates in the book. His effort will certainly benefit the field, as the volume for the most part demonstrates the terrific potential of agent-based social simulation.

In a sense, this review is a plea for further ambition. Why not take the agent basis of simulation seriously? Why not take generative explanation seriously, and propose theory-driven agent models rather than simple, ad hoc rules? Why not take out-of-equilibrium phenomena seriously, and investigate the reasons and processes that lead to convergence in the case of social norms?

Epstein applies generative explanation only to bottom-up processes. Why? What about the way back? Are there inherent reasons to give it up, and if so which ones?

One reason does exist: there is, as yet, no solid understanding of downward causation. And, as Simon (1969) would have said, "What can we learn from simulating poorly understood systems?"


* Notes

1 For Hall, explanation is based either on counterfactual dependence - i.e. the explanandum being removed by removing the explanans - or on a producing cause.

2"Ultimately, 'to explain' ... society ... is to identify rules of agent behaviour that account for those dynamics" (Dean et al. 1999, 201).

3 In Lewis's terms, conventions are based upon reciprocally induced expectations, but the philosopher perceived neither the motivational side of expectations nor the rights and duties they induce (for an explicit treatment, see Conte and Andrighetto 2007).

4 Indeed, evolutionary psychologists formulate such a hypothesis (Kelly and Stich 2007; Cosmides and Tooby 1992), but what they have in mind is a specific typology of norms (essentially, the norm of reciprocity), one that our hunter-gatherer ancestors worked out as an answer to problems of adaptation in their environment.

5 The answer is not obvious, because he could have referred to another interesting mental process, thanks to which norms get internalized. This means that the normative, exogenous source of a given motivation gets lost for whatever reason, but the process leading from this motivation to action is not thoughtless. When a norm has turned from an exogenous factor into an endogenous motivation to action, the only thing lost to memory is the source of one's goal.


* References

ALEXANDER S (1920) Space, Time, and Deity. London: Macmillan.

BROAD CD (1925) The Mind and Its Place in Nature. London, Routledge, Kegan Paul.

CASTELFRANCHI C (1998a) Simulating with Cognitive Agents: The Importance of Cognitive Emergence. In Sichman et al., Multi-Agent Systems and Agent-Based Simulation. Springer Berlin/Heidelberg: 26-44.

CASTELFRANCHI C (1998b) Emergence and Cognition: Towards a Synthetic Paradigm. In Progress in Artificial Intelligence - IBERAMIA'98: Ibero-American Conference on AI, Lisbon, Portugal, Springer Berlin/Heidelberg: 464.

CASTELFRANCHI C, DIGNUM F, JONKER CM and TREUR J (1999) Deliberative Normative Agents: Principles and Architecture. In Proceedings of ATAL'99 (Agent Theories, Architectures, and Languages: 6th International Workshop), Springer Berlin/Heidelberg.

CONTE R (2007) From Simulation to Theory and Backward. In Squazzoni F. (Ed), Proceedings of the II EPOS Workshop, forthcoming.

CONTE R, ANDRIGHETTO G, CAMPENNÌ M and PAOLUCCI M (2007) Emergence and Immergence in Complex Social Systems. AAAI Symposium, Washington, October 2007, in preparation.

CONTE R, CASTELFRANCHI C, DIGNUM F (1998) Autonomous Norm Acceptance. In Proceedings of ATAL'98 (Agent Theories, Architectures, and Languages: 5th International Workshop), Springer Berlin/Heidelberg.

COSMIDES L and TOOBY J (1992) Cognitive Adaptations for Social Exchange. In Barkow J, Cosmides L, and Tooby J (Eds.), The Adapted Mind: Evolutionary Psychology and the generation of Culture. Oxford University Press: 163-228.

DEAN JS, GUMERMAN GJ, EPSTEIN JM, AXTELL RL, SWEDLUND AC, PARKER MT and MCCARROLL S (1999) Understanding Anasazi Culture Change Through Agent-Based Modeling. In Kohler TA and Gumerman GJ (Eds.), Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes. Oxford University Press, New York and Oxford.

DENNETT D (1995) Consciousness Revisited. Penguin Books.

EPSTEIN JM (1999) Agent-Based Computational Models and Generative Social Science. Complexity, Vol. 4(5): 41-60.

GILBERT N (2002) Varieties of Emergence. In Proceedings of the Agent 2002 Conference on Social Agents: Ecology, Exchange, and Evolution, October 11-12, 2002.

GRUENE-YANOFF T (2006) Generative Science, And Its Explanatory Claims. In Models and Simulations, Paris: http://philsci-archive.pitt.edu/archive/00002785/.

HALL N (2004) Two Concepts of Causation. In Collins J, Hall N, and Paul LA (Eds.), Causation and Counterfactuals. Cambridge: The MIT Press.

HARTMANN S (1996) The World as a Process. In Hegselmann R et al. (Eds.), Modelling and Simulation in the Social Sciences from a Philosophy of Science Point of View. Kluwer, Dordrecht and Boston.

HEMPEL CG and OPPENHEIM P (1948) Studies in the Logic of Explanation. Philosophy of Science, 15(2): 135-175.

HENRICH J and BOYD R (2001) On Modeling Cognition and Culture: Why Cultural Evolution Does Not Require Replication of Representations. Journal of Cognition and Culture.

HUME D (1739-40) A Treatise of Human Nature.

KELLY D and STICH S (2007) Two Theories About the Cognitive Architecture Underlying Morality. To appear in Carruthers P, Laurence S and Stich S (Eds.), The Innate Mind, Vol. III, Foundations and the Future.

LATANÉ B and DARLEY JM (1970) The Unresponsive Bystander: Why Doesn't He Help? Englewood Cliffs, NJ: Prentice Hall.

LEWIS D (1969) Convention: A Philosophical Study. Blackwell, London.

PEDONE R and CONTE R (2000) The Simmel Effect: Imitation and Avoidance in Social Hierarchies. In Moss S and Davidsson P (Eds.), Multi-Agent-Based Simulation: Second International Workshop, MABS 2000, Boston, MA, USA. Springer, Berlin.

SIMON HA (1969) The Sciences of the Artificial. MIT Press, Cambridge, MA (3rd edition, 1996).

-------


© Copyright Journal of Artificial Societies and Social Simulation, 2007