© Copyright JASSS


J.P. Marney and Heather F.E. Tarbert (2000)

Why do simulation? Towards a working epistemology for practitioners of the dark arts

Journal of Artificial Societies and Social Simulation vol. 3, no. 4,

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 15-Aug-00      Published: 31-Oct-00

* Abstract

The purpose of this paper is to argue for clarity of methodology in social science simulation. Simulation is now at a stage in the social sciences where it is important to be clear why simulation should be used and what it is intended to achieve. The paper discusses a particularly important source of opposition to simulation in the social sciences, which arises from perceived threats to the orthodox hard-core. This is illustrated by way of a couple of case studies. The paper then discusses defences to standard criticisms of simulation and the various positive reasons for using simulation in preference to other methods of theorising in particular situations.

Keywords: reciprocal altruism, group living, segmentation

* Introduction

Though the word epistemology is used in the title of this piece, it would have been preferable to use the word methodology, for that, more specifically, is what this article is concerned with. Unfortunately, by common usage, the word methodology has become somewhat debased and is often taken to mean a discussion about the technical aspects of the specific methods that were used to achieve particular research objectives. The older meaning of the word is subtler. It is 'a philosophical discourse on method'. In other words, a discussion not just of how we did what we did, but also of why we did it in the first place, and, furthermore, of the way in which this contributes to the advancement of knowledge in the social sciences. As practitioners of a new and not very well established approach to the social sciences, it behoves us to consider carefully some of these fundamental questions about the motivation of simulation exercises. More specifically, the really big question that we have to answer to the full satisfaction of those who do not use simulation, or are antagonistic to it, is: 'why use simulation - what advantages does it have over more conventional techniques?' It is hoped that this paper will make a small contribution to this discussion.

A basic assumption of the discussion is that simulationists tend not to belong to the central orthodoxy of any of the main social sciences. This is derived from our experience as economists. In economics, simulation tends to be more prominent in the work of evolutionary economists than of mainstream economists. Given the work of Kuhn (1962) and Ziman (1978), which argues for structural similarities between the sociologies of different research paradigms, it seems reasonable to infer that the position of simulationists in other social sciences is broadly similar. Thus throughout the discussion there will be a basic dichotomy between 'the orthodox', who for their own very good reasons are mainly concerned with defending 'the way things have normally been done', and the heterodox, who are prepared to give a hearing to approaches which depart from traditional methods of enquiry, including, of course, simulation. This should not be taken to mean that the heterodox fully endorse new approaches; rather, it simply means that they are prepared to experiment with them. At a higher level, the differences between the orthodox and the heterodox on 'how research ought to be done' often translate into significant paradigm differences. Specifically, though the two groups may have much in common as practitioners of the same social science, there may also be quite marked divergence in the high-level core principles of either group. High-level core principles are beliefs and procedures which are used to organise and interpret theory, and to map theory onto empirical observation.

The question that arises then, and this forms the main theme of our subsequent discussion, is as follows. Is it possible to defend simulation, to the satisfaction of at least some of the orthodox, on proper methodological grounds? The short answer is: yes, it is. Notwithstanding the mutual incomprehension which may exist between the two groups, it is possible to argue that their research efforts are not necessarily incommensurable. Rather, they are taking place in different areas of the theoretical/observational edifice. If this point is granted, and we hope that readers will be persuaded that it should be granted on the basis of our subsequent argument, then it follows that there is nothing methodologically 'fishy' about simulation. In order to establish this point, however, it is necessary to set out a framework which allows us to elaborate and analyse (in very basic terms) the nature of research. Having answered this question, we then go on to consider more general reasons why simulation may offer significant benefits in social science research, and hence how simulation can conceivably be defended in methodological terms.

* A framework of research

This section is mainly concerned with the discussion of methodological principle. It is possible to skip it if you have no patience with epistemology and methodology and want to cut straight to the chase.

The framework which is adopted here is the one which is set out in Marney and Tarbert (1999). The model is mainly inspired by Ziman's (1978) account of how knowledge advances in science. However, it also makes use of Stewart's (1979) account of the hypothetico-deductive framework, Lakatos's (1970) concept of core organising beliefs and Kuhn's (1962) well-known work on the sociology of science and research paradigms. The main features of this model are as follows:

The research universe is split into two domains: the domain of constructs (c-domain) and the domain of protocols (p-domain). The former represents the world of theory, while the latter represents the observable world. The purpose of research, then, is to map c-domain hypothetical statements onto p-domain real-world experience. C-domain mental constructs (which may in ideal circumstances be logically exact) are used to provide organising principles for the purpose of interpreting the contents of the p-domain, which are vague and uncertain. C-domain mental constructs contain a number of statements with their own distinct order, determined by the requirements of deduction, with higher-level statements which are more general and further from observability, and lower-level statements which are more specific and closer to observability. At the highest level would be found axioms or postulates, while at the lowest level one would find predictions or experimental statements.

Stewart (1979) envisages the process of establishing correspondence between the two as taking place via a kind of idealised hypothetico-deductive algorithm. The investigator specifies which low-level statements from his construct are to be compared with which protocols, using the rules of correspondence. In other words, we might conceive of the process of the establishment of a theory as follows:

Figure 1.

We hope it is not doing a disservice to Stewart to characterise his hypothetico-deductive models as the application of a research algorithm. That is, the researcher
reasons upwards to higher-level statements. If his observations seem to be inconsistent with any of these higher-level statements, he may tentatively revise the statements concerned. Next, he reasons downwards again from these statements (revised or not) until he arrives at further low-level statements that can in turn be directly compared with the p-domain via rules of correspondence. From there, the whole process can start again. At each circuit, the higher-level statements of the theory are regarded as better substantiated than before - if they stand up to the assault from the facts - or they are revised as necessary. If, over a period of time, evidence builds up contrary to a number of the theory's high-level statements, there may come a point at which the whole theory has to be regarded as untenable. (Stewart 1979, p.75)
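Stewart's idealised algorithm can be made concrete with a short sketch. The following is our own illustrative rendering, not Stewart's notation: the function names and the 'strikes' tolerance are assumptions, but the loop captures the circuit of downward deduction, correspondence checking, upward revision and eventual abandonment described above.

```python
# Toy sketch (ours, not Stewart's) of the hypothetico-deductive circuit:
# deduce a low-level prediction, compare it with a protocol, revise on
# mismatch, and abandon the theory once contrary evidence accumulates.

def run_research_cycle(theory, observe, deduce, revise,
                       max_circuits=100, tolerance=5):
    """theory: dict holding a 'strikes' count of contrary evidence.
    observe() returns a protocol; deduce(theory) a prediction;
    revise(theory) tentatively amends the higher-level statements."""
    for circuit in range(max_circuits):
        prediction = deduce(theory)          # reason downwards
        protocol = observe()                 # consult the p-domain
        if prediction != protocol:           # correspondence fails
            theory["strikes"] += 1
            theory = revise(theory)          # reason upwards and revise
            if theory["strikes"] > tolerance:
                return "abandoned", circuit  # theory untenable
    return "substantiated", max_circuits

# A theory whose predictions never match observation is eventually
# abandoned; one that always matches is progressively substantiated.
status, circuit = run_research_cycle({"strikes": 0},
                                     observe=lambda: "tails",
                                     deduce=lambda t: "heads",
                                     revise=lambda t: t)
```

The sketch also makes Lakatos's later objection easy to state: in practice the `revise` step protects, rather than refutes, the hard-core statements.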

Notwithstanding the usefulness of Stewart's framework, there are a number of problems with it which require the modification of Stewart's approach as a methodological basis.

Firstly, as Lakatos (1970) has pointed out, some high-level statements are hard-core statements, and unless the paradigm (Kuhn 1962) is in crisis, they are not typically up for refutation. Hard-core statements are organising principles, and consist of metaphysical beliefs along with a positive and negative heuristic. In other words, the high-level statements of a theory are by way of a system of belief, which helps the theorist to make sense of the world. In the absence of the hard core, it is difficult, if not impossible, for the theorist to interpret observation. Reasoning will not necessarily run upwards from low-level correspondences. Often reasoning will run downwards, in order to reconcile unexpected or anomalous results with the organising principles of the hard-core. Observation does not necessarily drive research progress to the extent that Stewart envisages, because of the countervailing force of the hard-core. Thus, for example, hard-core downward reasoning is a particular feature of neoclassical economics (Marney and Tarbert 1999).

Secondly, it is necessary to point out, as Ziman (1978) does, that the rules of correspondence which establish mappings between the two domains are intersubjectively agreed. Hence the collective collegiate set of shared values has a huge influence on what is perceived to be truth and non-truth. Research truths are not generally objective but inter-subjective.

Thirdly, Ziman (1978) argues that the strength of science lies in cross-mapping and chain-linking. The kind of correspondence outlined in figure 1 would only be one small part of a three-dimensional map of cross-correspondences in any well-established science. As he puts it himself:
The surveyor deliberately collects redundant data so that the locations of the main features of the map are over-determined. Analogously, scientific knowledge eventually becomes a web or network of laws, models, theoretical principles, formulae, hypotheses, interpretations etc., which are so closely woven together that the whole assembly is much stronger than any single element. By analogy, a scientific theory representing only a chain of logical or empirical consequences - for example, a sequence of 'causes' and 'effects' - is weak for validation and barren for prediction. The valuable 'map' characteristics to which we have referred would scarcely be evident in such a theory. (Ziman 1978, pp. 82-84)

In summary, our view of scientific progress is that it is the mapping of hypotheses onto observations. These kinds of mappings would be more readily expected in the case of low-level statements, less so in the case of high-level statements. Although the development of the c-domain hypothetical chain and its mapping onto p-domain observations should in principle be driven purely by upward feedback from observation, in practice it will be heavily influenced by downward reasoning from the metaphysic of the core-belief system which provides the interpretative framework. Internal coherence may be just as important a driver of research as observational correspondence. Finally, a single mapping such as that illustrated in figure 1 does not do justice to the cross-mapped, three-dimensional nature of the typical mature research paradigm.

In the next section, we put forward the view that it is hard-core conflict which may well prove the most stubborn obstacle to the acceptance of simulation. Hence, we take some time to set out a couple of examples of hard-core conflicts. The main point that we wish to make is that it may be more productive to be clear about hard core conflicts from the outset.

* A couple of examples - application of our framework

Binmore's commentary on Axelrod

In a review of Axelrod (Axelrod 1997a), Binmore (1998) expresses his irritation on finding
...the jacket of Complexity of Cooperation congratulating Axelrod on his groundbreaking work in game theory, by which is usually meant his rediscovery of the fact that full cooperation can be sustained as an equilibrium in some indefinitely repeated games. But this subject had been well known for more than a quarter of a century before Axelrod began to write (Binmore 1998, p.1).

Binmore is prepared to concede that Axelrod is a pioneer in evolutionary equilibrium selection. However, he is critical of the fact that Axelrod resolutely ignores game theory commentary on his work, and of his unwillingness to see what theory can do before resorting to complicated computer simulations. Furthermore, Binmore states that Axelrod "ignores the widespread criticism from game theorists..." (Binmore 1998, p.4) and does not acknowledge that "theory can shed light on matters that Axelrod treats entirely through computer simulation. To a game theorist, such a wilful refusal to learn any lesson from theory seems almost criminal. What is the point of running a complicated simulation to find some of the equilibria of a game when they can often be easily computed directly?" (Binmore 1998, p.6).

Note that there is no compelling reason why Axelrod should necessarily approach the Prisoner's dilemma from the standpoint of orthodox game theory. Indeed it would appear to be a quite deliberate strategy on his part not to use the conventional approach. As he has stated himself:
Throughout the social sciences today, the dominant form of modelling is based on the rational choice paradigm. Game theory, in particular, is typically based on the assumption of rational choice. In my view, the reason for this dominance of the rational choice approach is not that scholars think it is realistic. Nor is game theory used solely because it offers good advice to a decision maker, since its unrealistic assumptions undermine much of its value as a basis for advice. The real advantage of the rational choice assumption is that it often allows deduction. (Axelrod 1997b)

That is, by contrast with orthodox game theory, he places much less central importance on standard neoclassical notions of the global rationality of agent-participants. Although this is not consistent with the core values of orthodox economics, it is perfectly consistent with Arthur's (1994) behavioural critique of neoclassical rationality.
The type of rationality assumed in economics - perfect, logical, deductive rationality - is extremely useful in generating solutions to theoretical problems. But it demands much of human behaviour, much more in fact than it can actually deliver. If one were to imagine the vast collection of decision problems economic agents might conceivably deal with as a sea or an ocean, with the easier problems on top and more complicated ones at increasing depth, then deductive rationality would describe human behaviour accurately only within a few feet of the surface. For example, the game tic-tac-toe is simple, and one can readily find a perfectly rational, minimax solution to it; but rational "solutions" are not found at the depth of checkers; and certainly not at the still modest depths of chess and go.... Modern psychology tells us that as humans we are only moderately good at deductive logic, and we only make moderate use of it. But we are superb at seeing or recognizing or matching patterns - behaviours that confer obvious evolutionary benefits. In problems of complication then, we look for patterns; and we simplify the problem by using these to construct temporary internal models or hypotheses or schemata to work with. (Arthur 1994, p.406)

Hence Axelrod's work may be seen as a natural extension of Arthur's critique, in which the prisoner's dilemma is played out between boundedly rational inductive agents. The consistency of Axelrod's work with an alternative behavioural interpretation gives us the confidence to assert that Binmore may have swept Axelrod's work into his own hard-core organising principles without necessarily asking his permission.

Thus, at the risk of being provocative, it could be argued that the motive force of Binmore's critique of Axelrod derives from Axelrod's implicit rejection of Binmore's hard core. Binmore's irritation with Axelrod may stem from the fact that Axelrod's work puts the hard-core metaphysic to the test, in a way that would never normally be countenanced by orthodox adherents of game theory.

Granted that this is the case, and that it is a commonplace situation for simulationists to be in, how are they to cope with this dilemma? There are two possible responses. One way forward is to take the approach of Moss, another simulationist and commentator on Axelrod's work. Moss (1998) explicitly rejects the orthodox hard-core. With regard to neoclassical equilibrium models he has this to say: "Whether or not the best attained results from selective imitation are as good as those implied by rational choice theory or global search methods such as genetic algorithms, it will still be a more efficient and rewarding mechanism in conditions where equilibria do not in practice prevail - the world we live in, for example." (Moss 1998, p.10). Furthermore, with respect specifically to examples which he cites of this kind of theorising, namely Caballe (1998) and Binmore, Piccione and Samuelson (1997), he states: "Both are analytical artefacts with no useful real world referent." (Moss 1998, p.9).

Thus, for those who are inclined to pursue this route, the first option for the simulationist is the wholesale rejection of the orthodox. In the specific example of Axelrod and Binmore, the case might be stated as follows. As Binmore himself admits, the alternative approach greatly simplifies equilibrium selection. If, in real life, individuals are more inductive and bounded, and less globally rational, than is typically assumed in orthodox game theory, it may be that the infinitude of possible equilibria found in classic game theory isn't a 'real' problem. That is, if similar lower-level correspondences can be achieved without resorting to an elaborate structure of high-level unobservables (a well-defined mathematically tractable utility function, global rationality, backwards induction etc.), then perhaps the more straightforward inductionist approach wins by Occam's razor. Hence, it is possible to argue (pace Binmore) that simulation may allow Axelrod to cut through over-elaborate theory while generating similar results.
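The flavour of this style of theorising can be conveyed with a toy iterated prisoner's dilemma. The sketch below is our own minimal illustration, not Axelrod's tournament code; it assumes the standard payoff matrix (temptation 5, reward 3, punishment 1, sucker's payoff 0) and two of the classic strategies.

```python
# Minimal iterated prisoner's dilemma, in the spirit of Axelrod's
# tournaments (our own toy version; standard payoffs assumed).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)   # each strategy sees the other's past
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2
```

Even this crude version reproduces the familiar result: two reciprocators sustain full cooperation throughout, while a defector gains only a one-round advantage over a reciprocator before being punished for the rest of the game.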

This first approach, however, could be regarded as overly confrontational and excessively conducive to unproductive conflict. Let us now qualify what we argued in the previous section. It is not seriously our intention to offer a sweeping rejection of Binmore and the orthodox game theorists, a body of individuals for whom we have great respect. Consider a second approach - an argument in favour of the benefits of redundancy and cross-mapping in research paradigms. We might observe that a different approach to a familiar problem may yield new and unexpected insights. As Binmore himself remarks of Axelrod, "he did us the service of focusing our attention on the importance of evolution in selecting an equilibrium from the infinitude of possibilities whose existence is demonstrated by the folk theorem." This is precisely because of the relative simplicity of his approach. In an evolutionary framework with bounded rationality, the set of potential equilibria becomes greatly simplified.

Thus, even if Axelrod and Binmore do not share hard-core organising principles, there may be cross-mapping research value in approaching the prisoner's dilemma from a different angle. Although this might appear to be redundancy, as Binmore argues, recall the quote from Ziman: redundancy is a necessary part of cross-mapping and research progress. Furthermore, as Latané points out with respect to computer simulation, "it is often helpful to state something in a different form" (1996, p.2). Moreover, as he observes, the addition of simulation to theory and empirical testing is commonplace in other subjects: "Physicists sometimes think of science as a stool with three legs - theory, simulation and experimentation - each helping us to understand and interpret the others" (1996, p.1). Thus redundancy, as such, is not necessarily a criticism of simulation.

We have deliberately stated our argument in an extreme form in order to get the point across, that a major reason for hostility to simulation is that knowingly, or unknowingly, simulationists may deviate too much from the orthodox hard core, and this may in turn antagonise the orthodox. The simulationist and the non-simulationist may have very different ideas of what constitutes core research values. Furthermore, one of the main sources of potential antagonism is that simulationist may be perceived to be putting orthodox core-values to the test or is in some other way undermining orthodox core-values.

For those who remain to be convinced of the importance of conflicts over core-values, we would point out that the insight was garnered from the years of reflection which came out of one of your authors' very painful experience of doing a simulation-based Ph.D. in economics with an Oxford-based professor, very much of the mainstream. In the next section, we present this experience as another example of core-values conflict between simulationist and non-simulationist. It would be futile to attempt to objectify this very personal experience. Hence, in order to proceed with this discussion, it will be necessary for one of your authors to break a research taboo which was dinned into him as a young research student, and talk in subjective terms and, even worse, in the first person.

* Example 2 - PhD Thesis - A computer simulation of Kaldor's growth laws

My PhD supervisor was very unsympathetic to my simulation model (Marney 1996). I may have had him wrong, but I think he wanted a great deal more mathematical analysis and a great deal less simulation. More fundamentally, he was an economics purist - that is to say, for him, economic theorising took place in the realm of the purely logical. Any empirical validation took the form of translating lower-level statements of the theory from logic into operational form, prior to subjecting them to empirical testing in the almost entirely separate realm of econometrics. The higher-level statements were indisputable and given, and that was that. Because there was such hostility and antagonism between us, many of the issues were there in the background but were never quite aired. Looking back, though, I think some of the problems were as follows.

He wanted something which was absolutely consistent with the higher-level statements of neoclassical economics. Hence I should have provided rigorously argued logical links between my theoretical model and the core-belief system which informs economics, the Pareto system. As a neo-Keynesian, he was prepared to concede that there may be good reasons why markets fail. However, the reason for market failure had to be exhaustively argued within the neoclassical framework. As a post-Keynesian, I found the whole idea of the literal truth of Pareto-optimality (either the absolute global optimality of resource allocation via the market system or, in the presence of market failure, through judicious market intervention) completely ludicrous, and therefore it was not an issue for me. It was, I thought, a useful metaphor, which had value in particular situations. But I could not accept the attitude that many economists appear to take: that, to all intents and purposes, a Pareto-optimal allocation of resources is to be regarded as if it is literally attainable because it is impeccably internally logical. [1]

The core-value of Pareto optimality was not a particular priority. My main priority was a plausible model which generated visually recognisable patterns of growth. This was to be done, furthermore, without resorting to high-level behavioural concepts which were unobservable and which could not be given values from actual observation. Admittedly, there was the drawback that the model might lack undeniable links to the rarified orthodox world of high economic theory (though I did thoroughly argue for links between the various parts of my model and medium-level theory). On the other hand, the model was bolted together using a series of empirically observable relationships and parameters. Furthermore, it did generate the combination of stable GDP growth and structural change which is not easy to reproduce, and that was precisely what I was looking for. The point was that I was not seeking explanations in terms of very deep high-level theory. I was seeking explanations in terms of proximate causes. The exercise involved some fearsome difficulties as it was, without having to construct all sorts of tortuous links to the metaphysical world of the neoclassical economist. My difficulties arose from the fact that, completely unwittingly, by simply not considering high-level core-theory, I had implicitly mounted a challenge to orthodox theory.

[1] No blame attaches to my supervisor. He was doing his job as he saw it.

In summary then, the two examples discussed above suggest a reason why simulation often attracts a great deal of hostility. It is that the simulation approach is often associated with distinctively different paradigms, such as artificial life and evolutionary approaches, which may be perceived as a threat to the hard-core by more orthodox practitioners. Potential hostility from defenders of the hard-core, then, is a primary problem for simulationists. We consider defences against such attacks, along with positive methodological principles underpinning simulation, in the next section.

* The Methodology and Epistemology of Simulation

Attacks and Defences

Unorthodoxy - deviance from the hard-core

First and foremost, as the previous sections have brought out, the simulationist needs to be aware that what they are doing may be perceived as a hard-core challenge. There are two possible ways forward: a) to justify very carefully why the hard-core theory may not necessarily be correct. Thus the case is made by Marney and Tarbert (1999) that the work of Arthur (1994) and Olson (1996) makes it possible at least to consider evolutionary models which are not motivated or structured by Pareto optimality and global rationality. b) A more conciliatory approach is to justify the work on the basis of Ziman's cross-mapping principle. If the hard-core is robust enough, it will re-emerge as a way-point on the research landscape even if a completely different route is taken. Redundancy is not a criticism of simulation. All science benefits from cross-mapping. This is a stricture which applies doubly to social science, where the correspondences between the real world and the theoretical world are often extremely shaky by the standards of orthodox science.

'It's a fix' - accusations of manipulation

Another area where the simulationist has to tread warily is when their work is subjected to the criticism of ad-hocery. That is, the assumptions necessary to get the simulation 'up and running' are unusual or unrealistic, and the results are contrived and merely the product of the dark manipulative arts of the simulationist. They are, in short, a simulation artefact. A reasoned response to this kind of criticism is that the simulation is no more and no less of an artefact than any other kind of mathematical or verbal model. All models are to some extent artefacts. All modelling exercises, whether simulations or not, involve the exploration of abstract entities in a metaphorical world to see if they can tell us anything about the real world. Indeed this is one of the main points of collegiate science: to use its value system to determine which metaphors are appropriate and meaningful for making progress in research and which are not.

In short then, simulation is just another means of manipulating the abstract and the metaphorical. The manipulation of the abstract is a necessary part of any theoretical exploration, as is the making of assumptions which are not entirely realistic. Hence the debate, more properly, should centre on whether the correct abstractions have been made, and whether the metaphors used are appropriate and useful, as it would for any other theoretical model. This kind of criticism from outside the simulation world might be easier to deal with if there were a generally agreed set of principles governing method in simulation, say, for example, along the lines suggested in Gilbert (1995) and Latané (1996). These include proper justification of the phenomena of interest and the modelling technique to be used, sensitivity analysis, robustness testing and so on.
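As an indication of what such principles might look like in practice, here is a minimal parameter-sweep harness of our own devising. It is purely illustrative (the function names are hypothetical, and this is just one way of operationalising sensitivity analysis): sweep the parameter grid and report the fraction of it on which a qualitative result survives.

```python
import itertools

# Illustrative sensitivity-analysis harness (our own sketch, not a
# procedure from Gilbert or Latané): run the model over every parameter
# combination and check how often the qualitative result holds.
def sensitivity_sweep(model, param_grid, qualitative_result):
    """model(**params) -> outcome; qualitative_result(outcome) -> bool.
    Returns the fraction of the grid on which the result holds."""
    names = list(param_grid)
    combos = itertools.product(*(param_grid[n] for n in names))
    results = [qualitative_result(model(**dict(zip(names, c))))
               for c in combos]
    return sum(results) / len(results)

# Toy usage: does a positive interaction effect survive across the grid?
robustness = sensitivity_sweep(lambda a, b: a * b,
                               {"a": [1, 2, -1], "b": [1, 3]},
                               lambda outcome: outcome > 0)
```

A result that holds on only a sliver of the plausible parameter space is exactly the kind of 'simulation artefact' the critic has in mind; reporting the fraction makes the robustness claim explicit.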

Benefits of simulations

Besides defences against criticisms of simulation, there are a number of positive reasons why simulation may be preferable to alternative forms of modelling. The simulation approach can be argued to have the following advantages over other approaches.

Complexity - Complex behaviour from simple assumptions - and simple behaviour from complex assumptions

Arguably, there is no other way of modelling complex systems. This is a particularly strong and well-known reason for doing a simulation, and it does not need rehearsing here. The point has been made, inter alia, in Epstein and Axtell (1996, pp. 16-17, 51-53), Gilbert (1995), Goldspink (2000) and Latané (1996). Notwithstanding this well-known and valuable property of simulation, simulations also lend themselves to complex modelling in a slightly different way: they allow for the multi-faceted or multi-dimensional modelling of agents in a way that is difficult or impossible using other approaches. This point is taken up below.

A related but slightly different point is that simulation allows modelling of aggregate social outcomes from heterogeneous individual behaviour. In economics, for example, the tendency has been to generate macro-level aggregates by assuming that individual behaviour conforms to that typified by the representative agent. This approach is exemplified in the well-known advanced graduate text, Blanchard and Fischer (1989). Hence aggregate behaviour is merely the outcome of the summation of the behaviours of n numbers of homogeneous representative agents. The relaxation of the assumption of the representative agent, which is possible only in computer simulation, has potentially far-reaching consequences. For example, Arthur et al. (1996) argue that the Efficient Markets Hypothesis, a very central proposition in financial economics, does not necessarily follow in models of share price formation once one drops the assumption of homogeneous investors. Rational and efficient equilibrium is much harder to establish because investors no longer have a common objective model of expectations formation, have no way of anticipating other agents' expectations of dividends, and cannot form expectations in a determinate deductive way. Thus, heterogeneity may allow the reconciliation of the EMH with market psychology and share price bubbles. The basic argument is that the EMH regime is only one possible state of the market. Under slightly different conditions, the actions of heterogeneous traders trying to second-guess each other will shift the market into a psychology-driven regime.

The remarkable thing, and this is the main point to emphasise, is that coherent, coordinated, relatively straightforward macro behaviour is still a relatively robust feature of this and many other social simulations, notwithstanding the fact that agents are following different local rules. The phenomenon of simple behaviour from complex assumptions is a feature of emergence and self-organisation which does not receive the same amount of attention as complex behaviour from simple assumptions but is arguably just as important a feature of self-organising behaviour.
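The point can be illustrated with a deliberately crude sketch of our own (not the Arthur et al. model): each agent follows its own local adjustment rule, with an individual speed and target, yet the population mean settles into a narrow, coherent band.

```python
import random

# Toy illustration of coherent macro behaviour emerging from
# heterogeneous micro rules (our own sketch, not Arthur et al. 1996).
def simulate(n_agents=500, steps=200, seed=1):
    rng = random.Random(seed)
    # Every agent has its own adjustment speed and its own target:
    # heterogeneous local rules rather than one representative agent.
    speeds = [rng.uniform(0.05, 0.5) for _ in range(n_agents)]
    targets = [rng.uniform(0.0, 2.0) for _ in range(n_agents)]
    x = [rng.uniform(0.0, 2.0) for _ in range(n_agents)]
    means = []
    for _ in range(steps):
        for i in range(n_agents):
            # Noisy partial adjustment towards the agent's own target.
            x[i] += speeds[i] * (targets[i] - x[i]) + rng.gauss(0.0, 0.1)
        means.append(sum(x) / n_agents)
    return means

macro = simulate()
```

Despite the heterogeneity and the noise, the simulated macro series is far smoother than any individual trajectory: simple behaviour from complex assumptions.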
Simulation allows better exploration of dynamics

In social science, the dynamics of social interaction are often more significant than the end-points or equilibria of the underlying behavioural processes. Demography, for example, would have little to say if population growth were static and the structure of the population constant. The ability to 'run' through social dynamics and present the output as time series or computer animations is regarded by simulationists as a valuable property in itself. Thus, as LeBaron, Arthur and Palmer (1997) comment,
we are not interested simply in equilibrium selection, and convergence properties alone. We are interested in the behavior of the learning and adapting market per se. Most of the reason for this is that we have situations in which the market never really settles down to anything that we could specifically characterize as an equilibrium. (LeBaron, Arthur and Palmer 1997, p.1)

The particular methodological strength of simulation in this respect is twofold. Firstly, the output of a simulation may lend itself better to c-domain/p-domain correspondence than that of other approaches, as it is relatively straightforward to compare actual time series, or actual visual information, with simulated dynamics. Secondly, there is the ability to replicate faithfully real-life situations characterised by continuous motion driven by continual disequilibria. For example, in Marney (1996) a fundamental criterion was to generate stable GDP growth within the correct parameter range concurrently with a structural shift from industry to services, in order to mimic what is actually observed. The idea was to simulate a characteristic property of the advanced economy: viz. that it is in continual transition.

More significantly, many social processes may be path or context dependent. Hence history is important, and simulation offers a convenient means of counterfactual experimentation with historical development and contextual background. With regard to their Sugarscape model, Epstein and Axtell (1996) comment that:
Bottom up models such as Sugarscape suggest that certain cataclysmic events - like extinctions - can be brought on endogenously, without external shocks (like meteor impacts), through local interactions alone. Scientists have long been fascinated by the oscillations, intermittencies, and "punctuated equilibria" that are observed in real plant and animal populations. They have modelled these phenomena using "top-down" techniques of non-linear dynamical systems, in which aggregate variables are related through, say, differential equations. Yet we demonstrate that all these dynamics can be "grown" from the "bottom-up". And when they are conjoined with the processes of combat, cultural exchange, and disease transmission, a vast panoply of "possible histories", including our proto-history, is realized on the sugarscape. (Epstein and Axtell 1996, pp. 8-9)
'Realism' - visual isomorphism and other isomorphisms

It is slightly misleading to talk about realism. Nevertheless, simulationists often feel that their models are 'more realistic' than the alternatives, in that they can often generate patterns which are isomorphic (usually visually) to the situation being modelled. In some simulations, of course, this is the whole point: the purpose of a flight simulator is to generate a visual (and aural and tactile) experience which is indistinguishable from actually flying a plane. Though other types of simulation are not concerned with such authentic replication, they still tend to place a high research value on the conformity of output with visual or instrumental observation. Thus, for example, an objective of Epstein and Axtell (1996) is to generate demographics, income distributions and market price distributions which are realistic. Arthur et al. (1996) and LeBaron, Arthur and Palmer (1997) are concerned to generate the correct statistical time-series properties of share price data, including leptokurtosis, low autocorrelation in the residuals and ARCH dependence in the residuals.

The same can also be said of simulation input. To take a specific case, a perceived benefit of using genetic programming to analyse share-price data in Fyfe, Marney and Tarbert (1999) and Marney et al. (2000) was that it allowed a faithful rendering of how stock market traders are perceived to behave. The genetic program produces 'agents' with varying degrees of success at pattern-finding in share price data. That is, the GP can be seen as a system of virtual inductionist agents who adopt and discard rules of thumb in the light of their ability to make a profit from share trading. More conventional time series analysis of the data would have to take a more rigid, pre-structured view of the kinds of rules that traders would be likely to adopt. Hence, it may be argued that the isomorphism of this modelling exercise provides greater descriptive realism, in the sense that the behavioural assumptions are not purely arbitrary and ad hoc. Neither are they rigidly derived from the neoclassical hard-core of optimising behaviour, though a loose link is not ruled out.
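The adopt-and-discard dynamic can be sketched in miniature. This is a deliberately simplified stand-in for a genetic program: the two rules of thumb, the imitation scheme and all parameters are invented for illustration only.

```python
# Two rules of thumb mapping the recent price history to a trading
# position (+1 long, -1 short). Hypothetical stand-ins for GP-evolved rules.
RULES = {
    "momentum":   lambda p: 1 if p[-1] >= p[-2] else -1,
    "contrarian": lambda p: -1 if p[-1] >= p[-2] else 1,
}

def simulate(prices, n_agents=20, switch_every=5):
    """Virtual inductionist traders: each follows a rule of thumb and,
    periodically, loss-making agents discard their rule and copy the
    rule of the most profitable agent."""
    names = list(RULES)
    agents = [{"rule": names[i % len(names)], "profit": 0.0}
              for i in range(n_agents)]
    for t in range(1, len(prices) - 1):
        window = prices[:t + 1]
        for a in agents:
            position = RULES[a["rule"]](window)
            a["profit"] += position * (prices[t + 1] - prices[t])
        if t % switch_every == 0:
            best = max(agents, key=lambda a: a["profit"])
            for a in agents:
                if a["profit"] < 0:          # discard an unprofitable rule
                    a["rule"] = best["rule"]
    return agents

# In a steadily rising market the momentum rule is profitable, so the
# population of virtual traders converges on it.
final = simulate([100 + t for t in range(30)])
print({name: sum(a["rule"] == name for a in final) for name in RULES})
# {'momentum': 20, 'contrarian': 0}
```

The point of the sketch is that the rule population is shaped by trading profit rather than fixed in advance, which is precisely what a pre-structured econometric specification cannot accommodate.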

More generally, the isomorphism of simulation allows for closer integration between the c-domain and p-domain. Hence simulation can be argued to allow a closer mapping between theory and observation than conventional theoretical approaches, and consequently the greater 'fidelity' of the simulationist's paradigm map may ultimately lead to better prediction.
Simulation and holism - towards a social science of real people

It is not clear whether the traditional techniques of science (the relentless application of specialism and the division of labour, along with the experimental and deductive method) have been as productive in the social sciences. Certainly, in many models in economics, one is left with the uneasy feeling that there is something quite eerily inhuman about robo-consumer as personified in that apotheosis of instrumental rationality, the utility-maximising individual. (Indeed, for economists, a salutary example, which illustrates the distance of the modern view of economic man from normal human experience, is to be found in the bizarre behaviour of some colleagues who have taken rational economic man as a model for life and behave accordingly.) This is not to say that individuals are not rational, or that they do not act in their own self-interest. However, the matter was put across in a much more holistic, rounded and qualified way in classic works such as Adam Smith's Wealth of Nations (1776) and Marshall (1920). Smith's self-interested individual was always set against their contextual social background, and Marshall was always careful to warn against taking mathematical economics too far. Nevertheless, the present caricature of the hyper-rational agent is perhaps a (logically) necessary consequence of the relentless application of reductionism. If we want a model which is absolutely logically watertight and mathematically tractable, and if we want to feel that we are making progress with the model, then we must increasingly pare away anything extraneous, until we are left with only a purely logic-driven Frankenstein's monster. Little is left of complex, rich, multi-faceted, 'normal' human behaviour.

It is hoped that it is not too obvious to point out that man is inherently complex and holistic: the whole is not necessarily the sum of the various parts - economic man, sociology man, psychology man and so on. Because of this holism, the relentless application of reductionism in social science will not necessarily lead to re-integration into a unified social science model, in the way that eventually happened in the physical sciences, where dialogue became possible on the basis of the same elementary model of the physical world.

Simulation offers a way of recreating holism and re-integrating social science, as a counter to the necessary tendency towards specialism and sectarianism (in a non-pejorative sense) which has resulted from the application of reductionism and the division of labour in social science research. Thus, for example, in Sugarscape, agents buy and sell, participate in cultural exchange, transmit diseases, have children and participate in violence, all at the same time. Without wishing to sound too grandiose, there is at least the potential here for a general social science model which goes some way towards reintegrating the contributions of the various social sciences.

Furthermore, the holistic qualities of simulation could go some way towards answering the three-valued logic problem posed by Ziman (1978). Much of the spectacular success of the physical sciences stems from the fact that their objects of study can be placed in sharp categories, and sharp categories obey two-valued logic: the physical subject in question either is or is not a member of the category in question, and is not a member of any other category. Thus, an electron is an electron and nothing else. Ziman argued that the techniques which have proved so successful in science, precise mathematical modelling and laboratory experiment, require sharp categories based on binary logic. Unfortunately, the logic of human behaviour is inherently three-valued. The human subject can belong to one or more of a number of referent groups, or be in any one or more of a number of psychological states. Hence the explanation of observed human behaviour is often confounded by the difficulty of disentangling a multitude of possible influences on behavioural outcomes. For example, if a person votes Labour, is this because they are a parent, or because they are a charity worker, or a resident of an ex-coal-mining area, or because they perceive themselves to be working class (even though they have a middle-class lifestyle)? Because of the three-value problem (yes, no or perhaps), progress in social science has been relatively slow.
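Simulation sidesteps the requirement for sharp categories because a simulated agent can belong to several categories at once. A purely illustrative sketch (the categories echo the voting example above, but every weight is invented):

```python
# Hypothetical pulls towards (+) or away from (-) voting Labour.
CATEGORY_WEIGHTS = {
    "parent":                        0.2,
    "charity_worker":                0.3,
    "ex_mining_area_resident":       0.5,
    "self_identified_working_class": 0.6,
    "middle_class_lifestyle":       -0.4,
}

def votes_labour(memberships):
    """The vote is the joint effect of every category the agent belongs
    to, rather than the consequence of one sharp classification."""
    return sum(CATEGORY_WEIGHTS[m] for m in memberships) > 0

# An agent can hold 'contradictory' memberships simultaneously.
voter = {"parent", "ex_mining_area_resident", "middle_class_lifestyle"}
print(votes_labour(voter))    # True: 0.2 + 0.5 - 0.4 > 0
```

No single binary classification of the agent would recover this outcome; the behaviour is a property of the overlapping memberships taken together.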

There can be no guarantee, of course, but simulation holds out the prospect of finding a way round the three-value problem. Firstly, in simulation it is possible, in principle, for analysis to proceed even if the agent in question belongs to a number of social and psychological categories which interact in complex ways: the computer can animate models which would be difficult or impossible to comprehend using conventional analysis. Secondly, simulation also holds out the prospect of addressing the question of which categories of humanity and which behavioural states are fundamental and irreducible at the level of the individual, and which are higher-level and emergent as a result of social learning and social interaction. As a result, it may be possible to organise social science categories better. For example, belief in a deity or supernatural agency is a fundamental characteristic of most societies; no anthropologist ever found a tribe of rational atheists. The actual form that this belief takes, however, varies from society to society, and appears to be culturally and contextually dependent. As Blackmore (1999) points out, there are numerous instances of near-death experiences, in which a person has the experience of being drawn up to the light and having some kind of numinous encounter. However, the god or the saints encountered by the subject are always those of their own religion, never those of others. Hence, belief in something may be fundamental and axiomatic, while the actual structure of the belief is higher-level and emergent. Certainly, Epstein and Axtell (1996, chapter 3) have demonstrated how cultural transmission, including religion, may emerge as a result of simple behaviours, context and social dynamics.

In short then, in simulation, it is possible to preserve the holistic and multi-faceted aspects of individual behaviour while at the same time applying analytical methods. This makes possible rich behaviours which are consistent with common-sense observation, and which are less extreme or contrived than those found in many specialist models of behaviour. In addition, it may be possible to address the three-value problem which confounds social science by use of multi-category agents and by being able to distinguish between behavioural categories which are fundamental and those which are emergent.

* Summary and Conclusion

The purpose of this paper has been to make a case for methodological purposefulness in social science simulation. As a comparatively new and undeveloped discipline, simulation in the behavioural sciences is particularly vulnerable to attack from the 'orthodox', as it may be seen as an assault on the hard-core value system of the research paradigm in the various social sciences. The nature of these kinds of conflict between simulationists and non-simulationists was illustrated by way of a couple of case studies. However, it was argued that these conflicts need not arise if the implications of Ziman's (1978) mapping principles are observed. These are: i) that the purpose of science is to map constructs onto the observable universe; ii) that there is merit in cross-mapping and redundancy - if we can generate the same result from different starting points we may have more confidence in its validity, and there may be additional insights, short-cuts or simplifications to be gained from cross-mapping; and iii) that, as it is difficult to fit social science into binary-logic categorical schemes, mappings in behavioural science tend to be very shaky, so point ii) applies with even greater force. Hence simulation can be argued to be a valid addition to social science, a valuable 'third leg' which supplements and complements the activities of empirical testing and theorising.

The paper went on to consider the kinds of issues which need to be addressed, and the kinds of benefits of simulation modelling which can be set out as positive virtues of the approach in contrast with other theoretical approaches. Simulation is a superior technique, and perhaps the only way of proceeding, in the following situations.
  1. Where complex global processes and dynamics emerge from simple local behaviour.
  2. Where coordinated global outcomes are generated by heterogeneous local decision rules.
  3. Where the representation of the unfolding of the dynamic process is an important part of the overall modelling exercise.
  4. Where, on grounds of realism, it is desired to improve mapping isomorphism in either the input or the output of a simulation.
  5. Where it is desired that the characteristics of the behaviour being modelled encompass holism.

Thus, the purpose of simulation is not to replace traditional social science. Rather, ideally, it is to take the study of behaviour in new directions which have not previously been possible, and to contribute to the relaxation of traditional obstacles to progress in the behavioural sciences.

* Notes

1 See Marney and Tarbert (1999) for a discussion of the status of Pareto optimality in economics.

2 The point of the simulation was to determine whether the rather interesting observable patterns of GDP growth and sectoral change could be reproduced with the model. The interesting thing about long-run GDP growth patterns in the better-off, more developed countries is that after the initial period of industrialisation, when GDP growth can be very rapid, growth falls away and then remains constant at a level high enough to sustain significant improvements in living standards. In addition, GDP growth rates are relatively constant even though the underlying economy generating the GDP is undergoing massive structural change. A well-known transformation that takes place in almost all economies begins with employment and production concentrated in agriculture, which then shrinks rapidly in relative (though not absolute) terms as industrialisation takes off. This is followed by a period in which industry dominates production and employment, prior to a period of maturity in which the underlying composition of GDP and employment shifts rapidly from industry to services (agriculture by now being insignificant).
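The compositional side of this stylised transition can be sketched with a toy three-sector model. This is not the neo-Kaldorian model of Marney (1996); the sectoral growth rates and initial shares are invented purely for illustration, and the sketch captures only the shift in shares, not the constancy of aggregate growth.

```python
# A toy three-sector economy: each sector's output grows at its own
# (invented) compound rate, and we track shares of total GDP over time.
INITIAL = {"agriculture": 0.7, "industry": 0.2, "services": 0.1}
GROWTH = {"agriculture": 0.005, "industry": 0.03, "services": 0.04}

def shares_at(t):
    """Sectoral shares of GDP after t periods of compound growth."""
    levels = {s: INITIAL[s] * (1 + GROWTH[s]) ** t for s in INITIAL}
    total = sum(levels.values())
    return {s: lvl / total for s, lvl in levels.items()}

# The economy passes through the stylised sequence: agriculture
# dominates at first, industry at mid-transition, services at maturity.
for t in (0, 60, 200):
    s = shares_at(t)
    print(t, max(s, key=s.get))
```

Because services grow faster than industry, which grows faster than agriculture, the dominant sector changes over time in exactly the agriculture-industry-services sequence described above.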

* References

ARTHUR W.B. (1994) 'Complexity in economic theory', American Economic Review, v.84, no.2, pp. 406-411.

ARTHUR W.B., HOLLAND J., LEBARON B., PALMER R. AND TAYLER P. (1996) An artificial stock market, Mimeo, Santa Fe Institute, Santa Fe, California.

AXELROD R. (1997a) The Complexity of Cooperation. Princeton University Press, Princeton, NJ.

AXELROD R. (1997b) 'Advancing the Art of Simulation in the Social Sciences'. In CONTE R., HEGSELMANN R. AND TERNA P. (eds.) Simulating Social Phenomena, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, Berlin.

BLANCHARD O.J. AND FISCHER S. (1989) Lectures on Macroeconomics, MIT Press, Cambridge, MA.

BLACKMORE S. (1999) The Meme Machine. Oxford University Press.

BINMORE K. (1998) 'Review of The Complexity of Cooperation by Axelrod', Journal of Artificial Societies and Social Simulation, v.1, no.1, January. https://www.jasss.org/1/1/review1.html

BINMORE K., PICCIONE M. AND SAMUELSON L. (1997) 'Bargaining between automata'. In CONTE R., HEGSELMANN R. AND TERNA P. (eds.) Simulating Social Phenomena, Lecture Notes in Economics and Mathematical Systems, Springer-Verlag, Berlin.

CABALLE J. (1998) 'Growth Effects of Taxation under Altruism and Low Elasticity of Intertemporal Substitution', The Economic Journal, January.

EPSTEIN J.M. AND AXTELL R. (1996) Growing artificial societies: social science from the bottom up. MIT Press.

FYFE C., MARNEY J.P. AND TARBERT H. (1999) 'Technical trading versus market efficiency - a genetic programming approach', Applied Financial Economics, v.9, pp.183-191.

GILBERT N. (1995) 'Computer simulation of social processes', Social Research Update, issue 6, University of Surrey. http://www.soc.surrey.ac.uk/sru/SRU6.html

GOLDSPINK C. (2000) 'Modelling social systems as complex: Towards a social simulation meta-model', Journal of Artificial Societies and Social Simulation vol. 3, no. 2, https://www.jasss.org/3/2/1.html

HEY J.D. (1996) 'Economic Journal: Report of the Managing Editor', Royal Economic Society Newsletter, issue 92, January.

KUHN T. (1962) The Structure of Scientific Revolutions, University of Chicago Press, Chicago.

LAKATOS I. (1970). 'Falsification and the Methodology of Scientific Research Programs'. In Lakatos I. and Musgrave A. (eds.), Criticism and the Growth of Knowledge (Cambridge: Cambridge University Press).

LATANÉ B. (1996) 'Dynamic social impact: Robust predictions from simple theory'. In HEGSELMANN R., MUELLER U. AND TROITZSCH K. (eds.) Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Kluwer, Dordrecht, The Netherlands.

LEBARON B., ARTHUR W.B. AND PALMER R. (1997) Time series properties of an artificial stock market, Unpublished mimeo, Santa Fe.

MARSHALL A. (1920), Principles of Economics. 8th edition, Macmillan, London.

MARNEY J.P. (1996) An interpretation and revision of Kaldor's growth laws and a dynamic simulation of a neo-Kaldorian model. PhD thesis, the University of Paisley.

MARNEY J.P., TARBERT H. (1999) 'A scheme for comparing competing claims in economics', Journal of Interdisciplinary Economics, v.10, pp.3-29.

MARNEY J.P., MILLER D., FYFE C. AND TARBERT H. (2000) 'Technical trading versus market efficiency: a genetic-programming approach'. Paper presented at Computing in Economics and Finance (CEF 2000), Barcelona, 6 July 2000. http://enginy.upf.es/SCE/index2.html

MOSS S. (1998) 'Simulating Social Phenomena by Rosaria Conte et al. (eds.) a review essay by Scott Moss', Journal of Artificial Societies and Social Simulation Volume 1, Issue 2 March 1998 https://www.jasss.org/1/2/review1.html

OLSON M. (1996) 'Big bills left on the sidewalk: why some nations are rich and others poor', Distinguished lecture on economics and government, The Journal of Economic Perspectives, pp.3-24, v.10, no.2, Spring.

STEWART I. (1979) Reasoning and Method in Economics McGraw-Hill.

ZIMAN J. (1978) Reliable Knowledge, Cambridge University Press.

