© Copyright JASSS


Chris Goldspink (2002)

Methodological Implications Of Complex Systems Approaches to Sociality: Simulation as a foundation for knowledge

Journal of Artificial Societies and Social Simulation vol. 5, no. 1

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 1-Nov-2001      Accepted: 11-Jan-2002      Published: 31-Jan-2002

* Abstract

There is growing advocacy for the adoption of computational methods as a substitute for, or complement to, traditional research methods, particularly for examining social phenomena arising from organised complexity. This paper examines some of the reasons for this advocacy and the specific advantages of the method for studying such phenomena. It also considers the limitations and problems that need to be addressed if the method is to gain wider acceptance. In joining the advocacy of these techniques, a framework is proposed which can assist with the incorporation of computational techniques into a broader methodological mix. Such a mix has the potential to harness the strengths of the method while offsetting some of its weaknesses.

Keywords: Complexity; Computer Simulation; Methodology; Social Research

* Introduction

Computer based simulation is increasingly argued to constitute an important method for studying phenomena arising from organised complexity. Advocates posit the method either as an alternative to traditional methods or as a complementary 'third scientific discipline' (Axelrod 1997, Ilgen & Hulin 2000). Certainly recent advances in computer capability and power have made possible approaches to research that were, until very recently, the preserve of a select few well-funded research centres in fields like physics and meteorology. To a very large degree, the increased adoption of computational methods has been fuelled by, and has in turn fuelled, interest in the behaviour of non-linear systems. Phenomena arising from complex organisation and non-linear interaction are being found to be important in an increasing number of fields including the social, cognitive, behavioural and organisational sciences. The growing realisation that social phenomena, including economics (Arthur et al 1997, Ormerod 1995, 1998), organisations (Marion 1999), minds (Kennedy & Eberhart 2000) and other social systems (Eve et al 1997, Epstein & Axtell 1996) frequently demonstrate the characteristics typical of complex systems containing significant non-linearity, challenges existing method. Such systems display emergent behaviour, i.e. behaviour not inherent in or predictable from a knowledge of their constituent parts (Holland 1998). Ilgen and Hulin state, for example, "our methods and theories remain far better suited for the deterministic and linear corners of [organisation science] than for the well populated chaotic regions of it." (2000: xv).

Despite this, researchers who adopt this 'third discipline' often report difficulty in having research results accepted by traditional peer reviewed journals. Clearly, for many, questions remain about the legitimacy and acceptability of this approach in some areas of social research. In this paper, reasons for the growth in interest in the method are considered, along with the particular relevance of the method for understanding complex phenomena. Some of the concerns are also addressed and a framework for the advancement of computational approaches towards a more mature methodology is suggested.

The Growing Interest in Simulation Method in the Social Sciences

There are no doubt many reasons for the growing interest in the potential of computational methods in the social sciences. One important reason is a concern in some fields about the robustness of existing methods, particularly those based on relativistic post-modern assumptions. Another, not unrelated, is the growing recognition of the implications of non-linearity in systems operation and dynamics (Eve et al. 1997, Marion 1999) and an observation that much of what is interesting in social phenomena may be the result of distributed properties (Kennedy & Eberhart 2000). This observation calls into question traditional scientific methods, both positivist and post-positivist, in both the natural and social sciences (Stewart 1990, Casti 1994, Holland 1998). A third reason is simply that, due to advances in technology, it is now possible to obtain the necessary computing power with a reasonable budget. Looking more closely at the first two reasons, the following observations can be made.

With respect to the concern about the rigour of post-modern methods it should be noted that there is a long established opposition to the application of positivist (reductive) method in many social disciplines (in particular sociology and anthropology) and an established advocacy for 'anti-naturalistic' method (Lincoln & Guba 1985, Rosenau 1992, Baert 1998) on the basis that traditional methods lack applicability and relevance in many social contexts. For some, though, the alternative to modernist method represents a retreat from science rather than a viable alternative (McKelvey 1997, Bookchin 1995). This presents an ongoing tension within social science disciplines (Burrell and Morgan 1994). What is sought is an alternative which addresses the post-modernist concerns regarding the irreducibility of social phenomena without abandoning the ability to test alternative knowledge claims, if not for validity, then at least for relevance.

Turning to the complex character of many social phenomena, the analytical intractability of complex systems clearly renders traditional reductive methods of little value. While mathematical and statistical modelling can assist with understanding the macro behaviour of such systems, these methods are not well suited to understanding the process of emergence: how micro order gives rise to macro order (Casti 1994). Hence, they can aid with the description of phenomena but not their explanation.

These two factors combine and contribute to the growing interest in the application of computer simulation to social research.

Looking to more specific benefits, Gilbert states that "Simulation comes into its own when the phenomena to be studied is either not directly accessible or difficult to observe directly." (1996: 2). Compared with standard research approaches which involve observation of, or experimentation with, people within organisational contexts, Prietula et al also note that: "computational models are generally less noisy, easier to control, more flexible, more objective, and can be used to examine a larger variety of factors within less time." (1998: xv). The interest in simulation is growing also as researchers become increasingly concerned with the dynamical properties of social systems (Gilbert & Conte 1995). In particular Conte et al (1997: 13) attribute it to:

  • growing interest in emergent structures;
  • difficulties of conventional analytical methods of research and empirical method; and
  • growing interest in decentralised phenomena and self-organisation.

Clearly, simulation methods are attractive as adjuncts to, or substitutes for, conventional methods and for the further testing or development of social theory.

The Role of Technology

Physical simulation has long been an approach adopted for testing systems with large numbers of degrees of freedom. Wind tunnels are used to examine aircraft or car body designs for stability and aerodynamics for example. Analogous processes in social science may include role-play and socio-drama. It is with the advent of the computer, however, that simulation becomes a far more relevant tool for social researchers. The reason is, as Simon states:

A computer is an organization of elementary functional components in which, to a high approximation, only the function performed by those components is relevant to the behavior of the whole system (1996: 17-18).

In other words, in the computer we have a means of building functionally equivalent models of a very wide range of natural and/or artificial systems.

As a method of study, simulations have two broad potential applications. The first is to examine the behaviour of 'real world' systems and the second is to explore 'artificial' systems. For the first, simulations are built upon models that are simplified analogues of the real world systems they have been constructed to assist with understanding. For the latter, models may be constructed which have no (current) real world analogue, in order to explore what could or might happen should such a system come into existence (or have once existed).

Potential Applications of Simulation

Looking at the potential applications in more detail, Axelrod (1997) proposes the following purposes towards which simulation may be employed: prediction, performance, entertainment, training, education, proof and discovery. Of these, prediction, proof and discovery are most relevant to this discussion. By prediction, Axelrod means the capacity to act on complex inputs to reveal 'consequences as predictions'. The results may then be used to refine or develop an associated theory. By proof he suggests that simulation can be used to demonstrate common and robust characteristics of systems, while discovery refers to simulation's potential to reveal the unexpected, to expose unanticipated relationships or results by making explicit what was hidden in the implicit 'rules' of operation or characteristics of components. He proposes that simulation be seen as 'a new way of conducting science', one which bridges traditional inductive and deductive approaches. He states for example that:

Like deduction, it starts with a set of explicit assumptions. But unlike deduction, it does not prove theorems. Instead simulation generates data that can be analysed inductively. Unlike typical induction, however, the simulated data comes from a specified set of rules rather than direct measurement of the real world (1997: 17).

For her part, Conte (1998) identified the following possible uses of simulations in the social sciences:

  • assess stability of a given phenomena;
  • extrapolate findings from simulation to real world;
  • check robustness of models by exhaustive search of parameter space;
  • explore hypotheses, question existing theories; and
  • construct working systems.

Simulation involves constructing a testable model and the process for developing simulations generally proceeds from theory to model to simulation. The theory itself may be more or less strongly related to a real system. However, while simulations are often derived from explicit theory, they also contribute to theory development because they "provide an explicit and systematic way of deducing the implications of a theory as it operates under particular circumstances to make predictions about outcomes over time" (Hanneman 1995).

The Particular Relevance of Simulation to Understanding Complex Systems

In order to explain a particular phenomena, it is necessary to identify a mechanism which, given the properties of the constituent components and of the environment, gives rise to the phenomena of interest. Having identified and described some phenomena of interest then, it is necessary to identify those processes which give rise to it at a level below that at which the phenomena is observed. With linear systems, this is simple. The process of analysis will point directly to the source. Where there is non-linearity present, analysis will not suffice, as the mechanisms which give rise to the phenomena cannot be located in the individual constituents but rather are a property of the system as a whole. This is not to say, however, that the target phenomena cannot be substantially explained using relatively few degrees of freedom.

Hence modelling can prove very beneficial in locating the minimum set of variables, parameters and system characteristics which give rise to the phenomena under study. The dynamics of non-linear systems are often described using state space models. The dynamics of such a system can be mapped as a trajectory in state space (the space of all possible states); this trajectory is referred to as an 'attractor' (Stewart 1990). A system operating 'on an attractor' will display constrained variability. While it is on that 'attractor', its behaviour may be described using relatively few dimensions. In many instances, provided the phenomena of interest are bounded by operation on an 'attractor', it is possible to simulate the phenomena by capturing in a model the dimensions that account for the regularity. Identifying the important dimensions is not a simple undertaking, however. These dimensions can be pursued by analysis, but the researcher will soon face the need to make inferences or establish premises in order to arrive at a simplified explanation. The problem is that, as Simon notes, "it may be very difficult to discover what [our premises] imply." (1996: 15). It is difficult to be certain that the simplified model has preserved all that was important in the original situation. This certainty cannot be arrived at deductively. Building a simulation model that preserves the variables believed to be important makes it possible to test whether or not the chosen assumptions give rise to the expected phenomena. Further, by subjecting the model to a wide variation of internal or external changes it is possible to examine the impact this has on the phenomena at the macro-level and the implications this may have for the 'real' system. In this way the validity of the simplified model can be examined and tested for reasonableness and robustness. We have a basis for a two-way test: from theory to model, and from model to real world. This form of testing is greatly simplified if the model can be rendered as a computer simulation.

Agent Based Simulation

Many traditional approaches to simulation (mathematical and statistical) have taken as their starting point macro-level abstractions (Hanneman & Patrick 1997). These models embody many high level assumptions and are therefore commonly tautological, encapsulating the basis of what it is they are intended to explain. Conversely, agent based approaches avoid intermediate explanations, attempting to reproduce macro-behaviour by varying micro-level agent characteristics and interactions. Agent based architectures are becoming much simpler to construct due to the development of object oriented programming languages. Consequently multi-agent, or Distributed Artificial Intelligence (DAI), models have become increasingly popular (Brassel et al 1997).

Multi-agent models are attractive because, as Brassel et al (1997) argue, they can "cover the full range of conceivable models". The approach is intrinsically flexible and embodies many desirable attributes such as easy scalability and expandability. These attributes are not readily achieved where models are derived from higher level abstractions. Significantly, not only can multi-agent approaches be an effective substitute for alternatives, but where an approach such as a mathematical model using differential equations is established, for example, as a reliable model for a particular task, it can be embedded into a broader multi-agent approach. In addition, multi-agent simulations can be used to model continuous or discrete state variables, lending themselves equally to linear or non-linear modelling tasks. They can be readily assembled at single or multiple levels and can be designed to provide simple reporting including visual representation of whatever state conditions are of interest to the researcher (Ferber 1999).

While agent-based models offer a great deal in principle, as Brassel et al (1997: 59) demonstrate, the suite of currently available simulation platforms and tools imposes a range of quite specific constraints which make each suitable for a limited class of problems. Key issues include the assumptions made in implementing agent properties such as intelligence or autonomy, and agent architecture (deliberative vs reactive); see Conte et al (1997: 11). Within DAI, agents are commonly classified as (adapted from Brassel et al 1997: 56):

  • Reactive - receive and respond according to fixed rules;
  • Intentional - include meta-rules to define goals and are capable of detecting goal conflict within specified bounds;
  • Social - contain models of other agents and can reason about other agents' goals, expectations and motives, incorporating these into their own action.

There are clearly a very large number of assumptions that can be built into agents. This is part of the attractiveness of the method, as it allows agents to be built to reflect assumptions about behaviour typical of a chosen theory. The assumptions made, however, often follow the intrinsic dispositions of current disciplines and, as a consequence, may reflect a systematic bias towards linearity and teleology (Goldspink 2000a, 2000b). Inappropriate assumptions may significantly limit the potential for simulation to contribute to new areas of understanding and diminish its value in practice.
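The reactive/intentional distinction above can be made concrete in code. The following is a minimal, hypothetical sketch: the class names, the energy variable and the rules are invented for illustration and are not drawn from any published model. An intentional agent simply layers a goal-checking meta-rule over a reactive agent's fixed rules.

```python
class ReactiveAgent:
    """Responds to a stimulus according to fixed rules only."""

    def __init__(self, energy=10):
        self.energy = energy

    def act(self, stimulus):
        # Fixed rule: flee when threatened, otherwise feed.
        if stimulus == "threat":
            self.energy -= 2
            return "flee"
        self.energy += 1
        return "feed"


class IntentionalAgent(ReactiveAgent):
    """Adds a meta-rule (a goal) that can override the fixed rules."""

    def __init__(self, energy=10, goal_energy=15):
        super().__init__(energy)
        self.goal_energy = goal_energy

    def act(self, stimulus):
        # Meta-rule: while the energy goal is unmet, keep feeding even
        # under threat -- a goal conflict resolved, within specified
        # bounds, in favour of the goal.
        if stimulus == "threat" and self.energy >= self.goal_energy:
            self.energy -= 2
            return "flee"
        self.energy += 1
        return "feed"
```

A social agent, in this scheme, would additionally hold models of the other agents (for example, estimates of their goals) and condition its own action on them.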

Simulation Design

Fishwick (1995) identifies three stages to the development of a simulation:

  • model design;
  • model execution; and
  • model analysis.

Each involves separate and distinct skills and implies different constraints. Model design, for example, may imply developing an abstract logical or mathematical formalism that captures the characteristics of interest to the theory or real world situation. Model execution implies transferring the design to a suitable simulation platform, coding it using an appropriate computer language and choosing hardware commensurate with the demands of the model design. With analysis, Fishwick notes that models are designed to 'provide answers at a given abstraction level'. Consequently, the type of analysis envisaged is an important input to the design process, as is the possibility of scaling the model, should alternative levels of analysis be necessary.

Axelrod (1997: 18), states that in designing or implementing simulation three goals should be sought: validity, usability and extendibility.

Validity refers to either of two aspects: firstly 'internal validity', which concerns the correspondence between the theoretical or abstract model to be simulated and its implementation; secondly 'external validity', the degree to which the model and simulation correspond to the real world. As a check on validity, Troitzsch (1997) argues that simulations, especially those to be used for prediction, should first be tested for their ability to explain past observations. A major problem with the establishment of the validity of a model is emergence. As emergent phenomena are frequently counter-intuitive, 'surprising' results must be demonstrably the result of the model, not artefacts or errors built into or resulting from the simulation method or the software or hardware upon which it is built. Proving this can be non-trivial.

Usability refers to the need for the simulation to make meaningful experimentation possible and intelligible to users. Finally, as simulations are frequently used to test ideas and to contribute to their extension and development, models which are non-extendable are of considerably less value than those that are.

Agent Design

Where agent based simulations are to be used, it is necessary to specify the characteristics of agents and their action potential, or capacity to interact. Parunak (1997) suggests that one way in which to analyse a system in order to determine appropriate agents and agent action potential is to decompose the narrative description of the model. Nouns indicate potential agents and verbs the actions they need to be able to perform (and hence the agent's type or class). Excessive reification in a model can present significant problems here, as reified definitions of an agent will incorporate bundled assumptions which may be inappropriate or which will present practical difficulties in implementation and/or validation. In order to avoid building in inappropriate high order assumptions it is necessary to define and build agents on the basis of non-teleological statements. What is needed, if a viable agent based system is to be designed, is to analyse all appropriate levels of the system under study without recourse to final causes. The narrative may also point to different levels of agent (implied hierarchy), further assisting design.
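Parunak's noun/verb heuristic can be illustrated with a hypothetical narrative such as "shoppers visit stalls and compare prices; stalls adjust their prices". In the sketch below the nouns (Shopper, Stall) become agent classes and the verbs (visit, compare, adjust) become their methods; every name and rule is invented for the example, and the rules are deliberately fixed and non-teleological.

```python
class Stall:
    """Noun 'stall' becomes an agent class."""

    def __init__(self, price):
        self.price = price

    def adjust_price(self, sold):
        # Verb 'adjust': a fixed, non-teleological rule -- nudge the
        # price up after a sale, down otherwise.
        self.price *= 1.05 if sold else 0.95


class Shopper:
    """Noun 'shopper' becomes an agent class."""

    def __init__(self, budget):
        self.budget = budget

    def visit(self, stalls):
        # Verbs 'visit' and 'compare': buy from the cheapest affordable
        # stall, if there is one.
        affordable = [s for s in stalls if s.price <= self.budget]
        if not affordable:
            return None
        choice = min(affordable, key=lambda s: s.price)
        self.budget -= choice.price
        choice.adjust_price(sold=True)
        return choice
```

Note that neither class encodes a system-level goal such as 'reach market equilibrium'; any such regularity must emerge from the interaction of these low-level rules.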

Analysis of Results

Simulations which incorporate non-linear action potential present problems for analysis. Axelrod (1997: 18-19) discusses the problem for analysis posed by path dependence and suggests that this can be dealt with in several ways. One is to tell a story of model behaviour as 'news', i.e. sampling states in chronological order. Alternatively it may be reported using the vehicle of the history of one agent or, yet again, of the system as a whole, the macroscopic point of view. Due to the problem of sensitivity to initial conditions, observations may need to be made and compared over many runs. Statistical analysis could then be used to identify patterns. Care should be taken not to resort to simple averages, however, as such systems are generally at their most interesting when analysed from the perspective of the difference in output resulting from changes in initial conditions.
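The multi-run procedure described above can be sketched as follows. The 'model' here is a toy random walk standing in for any stochastic, path-dependent simulation, and all names are illustrative:

```python
import random
import statistics

def run_model(seed, steps=100):
    # Toy stochastic model: a simple random walk whose end state is
    # strongly path dependent.
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.choice([-1.0, 1.0])
    return state

# Run the same model many times under different seeds and examine the
# distribution of outcomes, not just its centre.
outcomes = [run_model(seed) for seed in range(200)]
mean = statistics.mean(outcomes)
spread = statistics.stdev(outcomes)
```

For a walk of this kind the mean is close to zero and carries almost no information; the spread across runs is where the behaviour of interest lives, which is the sense in which simple averages can mislead.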

Limits to Simulation

While computer simulation clearly offers significant advantages if designed well and placed within a clear research framework which enables its distinctive problems to be managed, there are some significant challenges to be addressed if the approach is to gain wide acceptance. Many of these relate to traditional criteria for methodological acceptability and rigour.

Repeatability and Comparability

Sensitivity to initial conditions in many complex systems means that there may be little validity in directly comparing the response of one system with that of another. This is so because a system's response to perturbation is dependent on its structure and its history, and no two systems will be identical in structure or in their history. Furthermore, there is no basis for believing that a system's response to a given perturbation at one time will be similar to its response to the same or a similar perturbation at a later time. This is particularly the case where a previous change has involved the system leaving one 'attractor' and moving to another on a level that is important to the observation being made.
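The point can be made concrete with a standard toy example (not drawn from this paper): the logistic map in its chaotic regime. Two trajectories whose initial states differ by one part in a billion soon bear no useful resemblance to one another, so comparing single runs trajectory against trajectory says little:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    # x_{n+1} = r * x_n * (1 - x_n); chaotic for r = 4 on [0, 1].
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-9)  # a one-part-in-a-billion perturbation
divergence = max(abs(x - y) for x, y in zip(a, b))
```

Within 50 steps the gap between the two runs has grown by many orders of magnitude, which is why comparisons across runs of sensitive systems must be made statistically, over many runs.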

These constraints apply to simulations as well as to real systems, only more so. Simulations rely on the establishment of functional equivalence (i.e. the structure of the simulation is different from that of the real world but is designed to produce functionally equivalent behaviour). Functional equivalence can be expected to be bounded. Consequently, differences can emerge in the form of artefacts, which can significantly impact on the results of the simulation and may be difficult to separate from the consequences of the underlying model. Subtle differences in implementation, such as how time and concurrence are managed, have the potential to greatly influence the outcome of computer based simulations.

In order for a simulation to be repeated, details of the model and the specifics of the implementation need to be communicated. These include the parameters used and their starting values, initial values for all variables and possibly even the software used to build it and the hardware on which it was run! As Axelrod notes (1997: 19), this is difficult using existing vehicles for communicating research results such as journals and compilation publications. Communicating the results of a simulation without the details of the model and its implementation is of little value. There is often a need for a lengthy report to do justice to the complex design and results. This problem may be overcome with the broader acceptance of electronic journals, which make it feasible, for example, either to provide detailed attachments including source code, or to provide other researchers with access to the fully operational model on-line. This can be important as it eliminates complications such as different hardware and software environments, simplifying replication.
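One low-cost discipline along these lines is to archive, with the results, everything a replicator would need: parameters, initial values and the random seed, in a machine-readable form. A minimal sketch, with invented field names and a stand-in model:

```python
import json
import random

def run_simulation(params, seed):
    # Stand-in for a real model: a noisy accumulation process.
    rng = random.Random(seed)
    state = params["initial_state"]
    for _ in range(params["steps"]):
        state += params["rate"] + rng.gauss(0.0, params["noise"])
    return state

params = {"initial_state": 0.0, "steps": 100, "rate": 0.1, "noise": 0.5}
seed = 42
record = {
    "params": params,
    "seed": seed,
    "result": run_simulation(params, seed),
}
archive = json.dumps(record)  # publishable alongside the written report

# A replicator holding only `archive` can re-run and check the result.
replay = json.loads(archive)
assert run_simulation(replay["params"], replay["seed"]) == replay["result"]
```

This guards against gaps and ambiguity in the published description, though it does not by itself address platform-specific artefacts, which require re-implementation on an independent platform.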

For the results of any simulation to be considered robust, the results of a particular implementation of a model should be able to be replicated using an alternative simulation platform capable of modelling the same behaviour. Axelrod (1997) has observed that such replication is seldom attempted. To ascertain the robustness of findings from a range of published simulations, Axelrod, Cohen and Riolo set out to replicate a range of previous models using the SWARM simulation system. They report (Axelrod 1997) achieving very comparable results to the original implementations for the simulations they chose. As a result of the experience they identified the following problems which complicated the task of replication:

  • ambiguity in model descriptions and the presentation of results;
  • gaps in descriptions;
  • erroneous published descriptions.


Generalisability

The ability to infer that the results of a specific simulation, or even a set of runs of a simulation, would apply in alternative contexts is problematic. Again, due to sensitivity to initial conditions and path dependence, a simulation may yield little which is generalisable, at least in terms of specific outcomes. It may be possible to infer generality within bounds, or to identify patterns of behaviour typical of a class of system of which the simulation is one example. However, as Kollman et al (1997: 462) note, "The risk of any one computational model being 'a mere example' unfortunately exists".


Prediction

Troitzsch classifies the types of prediction possible using simulations as either qualitative, concerning the modes of behaviour of a system or class of systems, or quantitative, that is, specifying input conditions and simulating actual output states for a given set of parameters. He notes that:

If sensitivity analysis has yielded the result that the trajectory of the system depends sensitively on initial conditions and parameters, then quantitative prediction may not be possible at all. And if the model is stochastic, then only a prediction in probability is possible (1997: 49).

The contingent, path dependent nature of many complex system dynamics implies that "any attempt at simulation as a way of seeing what might be as opposed to what has been is a pointless activity" (Byrne 1997: 2).

On the other hand, simulation offers the possibility of reproducing historical dynamics and therefore of explaining retrospectively the possible reasons for observed behaviour. Furthermore, it should be possible to identify the conditions that lead to certain behaviour or classes of behaviour in a system. This has some 'forward reach' (Byrne 1997) since, if a system's behaviour or mode of behaviour can be mapped against change in parameters, observation of parameter change will allow the corresponding changes in system mode to be predicted. Byrne goes on to distinguish between prediction and prescription, arguing that:

We can't know what will happen regardless of our acts. We can know what might happen if we act a certain way... In this way of thinking simulation is clearly a tool which helps us not know what will happen, but what can be made to happen (1997: 5).

Data Availability

Development of models, if they are to say anything meaningful in terms of past, current or possible states of real systems, requires the identification of relevant parameters and variables. Values have to be assigned to these parameters and variables and this may involve calibrating those values to real world equivalents. The availability of suitable data can be a significant problem. Often data will need to be estimated. Due to the possibility of sensitivity to initial conditions, the consequences of any such estimation need to be explored. This may be done by running the model using a range of values. Large divergence of behaviour with relatively small changes to some parameters indicates high sensitivity. Effort can then be made to improve the accuracy of the data for those parameters. Conversely, some parameter value changes may have little or no impact on the model. These parameters can be eliminated or data estimated more crudely with some confidence that this will have no significant consequences for the model. A strong caution is needed here, however, as it is possible for changes in one parameter to alter the sensitivity of another, sometimes dramatically. Many test runs may be needed, exploring a wide range of the system's parameter space to ensure that the simulation is insensitive to such error. This may become a major undertaking (or may be impossible) in models with large numbers of parameters. Fortunately, this process can be automated in some simulation platforms.
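The sensitivity check described above can be sketched as a simple one-at-a-time parameter sweep. The model, its parameters and the spread measure below are all invented for illustration; a real study would sample the parameter space far more densely and, as cautioned above, examine interactions between parameters as well:

```python
def model(growth, capacity, steps=30):
    # Toy logistic-growth model standing in for a calibrated simulation.
    x = 1.0
    for _ in range(steps):
        x += growth * x * (1.0 - x / capacity)
    return x

def spread(param_values, run):
    # Crude sensitivity measure: the range of outputs over a sweep.
    outputs = [run(v) for v in param_values]
    return max(outputs) - min(outputs)

# Vary one parameter at a time over +/-10% of its baseline value,
# holding the others fixed.
growth_spread = spread([0.09, 0.10, 0.11],
                       lambda g: model(g, capacity=100.0))
capacity_spread = spread([90.0, 100.0, 110.0],
                         lambda c: model(0.10, capacity=c))
```

Effort at improving data quality can then be focused on whichever parameter shows the larger spread, remembering that a change in one parameter can alter the sensitivity of another, so a one-at-a-time sweep is only a first pass.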

As a further complication, models of social systems may include variables which are highly abstract and cannot easily be 'measured' (e.g. 'arousal', 'commitment'). Even where such variables can be scaled, there may be little information about the linearity of such a variable, or the point at which it ceases to be one thing and becomes another ('frustration' building to 'anger'). This compounds the problem of calibration, and models should generally avoid such abstractions wherever possible.


Validation

Testing a model for validity implies seeking confirmation of functional equivalence, at least within the range of parameters characteristic of the system being modelled. Two aspects of simulation work make this difficult. Firstly, simulations are often used as exploratory vehicles to examine the behaviour of a real system. As the simulation is used to understand the system being modelled, it is difficult to directly calibrate or validate against the real system, which is, as yet, not understood. Secondly, as the systems being modelled are frequently very complex (hence the choice of simulation in the first place), making complete comparisons between model and real world behaviour is commonly not possible. This leads to the need for a mixed methodology, one where validation and experimentation can take place iteratively, with a methodology directed at seeking verification of modelled behaviour operating concurrently with the simulation modelling.

To the extent that data from a simulation fails to compare with empirical observation of a comparable real situation, several possibilities exist. Firstly, the theory may be wrong. Alternatively, the translation of the theory into a concrete model may be flawed. Similarly, translation of the model into a simulation may be the source of the problem. Finally, the platform on which the simulation was built, or seemingly minor design details, may be introducing artefacts which affect the outcome (Leik and Meeker 1995). Deciding which of these problems (or combination of them) has occurred can be a difficult process. The last may be helped by the increased adoption of multi-platform languages (such as Java). If the results of a simulation can be replicated on alternative platforms, this tends to remove the possibility that divergence from expected behaviour has platform specific artefacts as its source.

Leik and Meeker (1995: 465-466) suggest a set of rules for building simulations which have some hope of being validated.

  1. every tie from the simulation to the model to the substantive theory needs to be made explicit;
  2. the way each algorithm in the simulation works needs to be laid out so others can judge its appropriateness;
  3. every constraint on variables, parameters, numbers of runs and so forth needs to be justified;
  4. every decision about what to examine and report needs to be made explicit;
  5. all justifications must be in light of the substantive interpretations to be made of the model being simulated.

Adoption of such discipline would significantly advance the approach, but these are substantial demands that will at times be difficult, if not impossible, to accommodate fully. A quasi-structured language with broad applicability, i.e. one that can be used for a wide range of simulations of different types of phenomena, would greatly assist. Such languages are being developed in the computing industry, and suggestions have been made to extend them to the social simulation field also.
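In the spirit of these rules, the following sketch (all names are hypothetical illustrations, not an established convention) shows how a run specification might force every parameter to carry an explicit justification and be reported alongside the results, addressing rules 1, 3 and 4 in code rather than only in prose.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Parameter:
    """A model parameter that cannot be declared without a
    justification (rule 3)."""
    name: str
    value: float
    justification: str

@dataclass
class RunSpec:
    """An explicit record tying a batch of simulation runs back to the
    substantive theory, published alongside the results (rules 1 and 4)."""
    theory: str
    parameters: list = field(default_factory=list)
    n_runs: int = 1

    def report(self):
        """Render the specification for inclusion with reported results."""
        lines = [f"Theory: {self.theory}", f"Runs: {self.n_runs}"]
        for p in self.parameters:
            lines.append(f"  {p.name} = {p.value}  # {p.justification}")
        return "\n".join(lines)

spec = RunSpec(
    theory="Opinion convergence under pairwise averaging",
    parameters=[
        Parameter("n_agents", 50, "smallest population showing stable convergence in pilot runs"),
        Parameter("n_steps", 2000, "steps to quasi-equilibrium observed in pilots"),
    ],
    n_runs=30,
)
print(spec.report())
```

The design choice is simply that omitting a justification is a construction error rather than an oversight discovered at review time.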

* Simulation: Where Does it Fit?

Given the above, just where does computer based simulation fit as a method? It should be apparent that as a stand-alone method, there are significant problems with computational approaches. There is a very real risk that computational models will become ends in themselves, both the subject and object of study. Some have argued that the models used by neo-classical economists have become just this, interesting toys with little or no relevance to our developing understanding of real economies (Ormerod 1995, 1998, Galbraith 1994, Hodgson 1996, Arthur et al 1997).

To summarise the position established above, social researchers have long confronted difficulties in isolating phenomena of interest from other environmental variables. This has posed problems for the rigorous experimentation and theory testing exemplified by the natural sciences, leading to advocacy of 'anti-naturalist' method in some social disciplines, particularly sociology, anthropology and some branches of the behavioural and psychological disciplines. Simulation is becoming a popular means of describing and exploring complex systems, both natural and social (Hanneman & Patrick 1997). For some, simulation offers a method that can conserve methodological holism while avoiding the pitfalls of the epistemological relativism of post-modernism. There remains the problem, though, of relating the simulation and any results obtained back to real world phenomena. The method of simulation would seem to offer most, then, when used in conjunction with existing methods which help to test its theoretical and empirical relevance if not validity.

* Methodology Appropriate to Understanding Phenomena Resulting from Non-linearity

McKelvey (1999), in a contribution to the first edition of the journal Emergence, makes some useful observations about the scientific methodology appropriate to complex systems, particularly as applied to organisations, but arguably relevant to any social application. In arguing for the need for greater rigour in the application and extension of complexity theory to the social domain, and recognising that such an extension implies application to situated and dynamic abstract constructions as well as concrete ones, he sets out the three frameworks shown in Figure 1.

In the axiomatic approach, which is typical of positivist method, theory is developed from a set of axioms. The theory is then used to develop a model and the model is tested against the phenomena of interest. In such an approach, there should be one model consistent with explaining the phenomena. If the model predictions are confirmed by empirical study of the phenomena, then the theory is regarded as correct (by dint of having passed the test of falsification). The axiomatic approach is generally regarded as appropriate for the natural sciences and has been adopted also within economics. It has proven difficult to apply in many social sciences, in particular those where the phenomena under study are emergent - the product of complex interplays between actors and environments. This includes sociology, anthropology, organisation science and the behavioural and management sciences. As noted, this difficulty has led to its rejection by many on philosophical and/or practical grounds. This rejection is, however, interpreted by some as a retreat from science and raises concerns as to the strength of legitimacy claims for these social disciplines. Conversely, where the axiomatic approach has been adopted in social disciplines (such as economics), there are critics who argue that it can be achieved only by imposing often inappropriate simplifying assumptions, and that this makes the resulting models poor analogues of the situation under study, again making the results questionable. While accepting that the axiomatic approach is inappropriate for much social science, McKelvey maintains a concern that many social theorists, in rejecting such a method, have failed to identify a suitable replacement which furnishes adequate tests of validity. In other words, he is concerned that some social disciplines have relaxed tests for legitimacy too far.

Fig 1
Figure 1. McKelvey's (1999) conception of the Axiom-theory-model- phenomena relationship.

The most common use of models in sociology and the organisational and behavioural sciences is as a means of communicating theory. From this perspective multiple models are possible and may be necessary. This is the approach illustrated under the heading of 'Organization Science Conception' in Figure 1. In this schema the model is illustrative. As such it is of limited value as a basis for testing the legitimacy of theory. Such models are also frequently gross simplifications and isolate particular phenomena. Most are not stated in terms sufficiently precise to allow them to be rendered as a simulation. Therefore, they have a low level of 'instrumental reliability'. At the same time, the real phenomena to which the model points may be difficult to isolate and measure directly. Consequently, both model and theory are relatively informal, and this limits the ability to formulate testable propositions from them and to decide between alternative conceptions. The approach leads to hypotheses that are difficult to falsify, and thus it is difficult to eliminate weak theory.

Consequently, McKelvey argues for the semantic approach shown in Figure 1. Here the theory and model are viewed independently. Truth testing takes place in two ways. Firstly, experiments are conducted using the model. This allows the prediction of theory to be tested in a controllable way, albeit in a simplified analog of the real phenomena. Secondly:
ontological adequacy is tested by comparing the isomorphism of the model's idealised structures/processes against that portion of the total "real-world" phenomena defined as "within scope of the theory" (1999: 18).

Here the model can and should be highly formal. Where the phenomena to be investigated are dynamic, and particularly where non-linearity is present, the model will need to represent a close analogue of the dynamical properties of the real world if it is to reproduce comparable behaviour. The bounds of the 'analogous' design will need to be well established and clearly defined. Hence, using this method there is a two-way test, 'theory-model' and 'model-phenomena'. Further, the model is more rigorously founded than that typical of the Organization Science Conception. Note that alternative theoretical conceptions may be used to understand any model, and alternative models derived from any given theory. Rigorous experimentation on the real world is severely limited for all complex systems due to the possibility of sensitivity to initial conditions at some level and consequent history dependence. McKelvey notes that it is only formalised and testable models that lend themselves to systematic exploration and experimentation. The increasingly widespread adoption of simulation modelling for complex systems represents a means for making the semantic method operational.

A Suggested Framework

To support the application and extension of simulation as a rigorous basis for social research an extended form of McKelvey's semantic model is proposed.

Fig 2
Figure 2. An extended semantic model.

This demonstrates the role of simulation as a method for examining complex systems. To give effect to this method, two complementary approaches are required: simulation and situated research.

Here, initial study of the phenomena using situated research is undertaken to identify fundamental precepts and to choose meta-model elements as a guide to the development of a specific simulation. This research will also contribute to initial theory development. Situated methods will contribute to the identification/evaluation of:

  • the degree of structural isomorphism between phenomena and the model;
  • relevant parameters;
  • relevance and adequacy of model scope (boundary);
  • parallel behaviour at extremes;
  • ability of the model to replicate behaviour of the phenomena;
  • existence or not of anomalous behaviour;
  • predictive capability of the model.

All of these are important considerations (Shreckengost 1985) and can be evaluated by systematic experimentation using the model. The model behaviour guides investigation of the phenomena, which, as it can be approached only in situ, must be studied using situated research methods. At the same time, the model behaviour is systematically compared with behaviour predicted by the theory. Again we have a two-way test: the theory guides the model, and the model is used to refine the theory as its behaviour is systematically tested both against the predictions of the theory and against the behaviour observed as characteristic of the phenomena. Experimentation in the more traditional scientific sense is restricted to the model/theory relationship. The simulation model thus forms the core of the investigation, providing the basis for theory elaboration and development and for testing the theory for empirical validity.
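Systematic experimentation on the model can be sketched as a parameter sweep over a toy model (the pairwise-averaging opinion model below is a hypothetical stand-in, not a model from this paper): here the theory predicts that dispersion of opinions falls as interaction increases, and the sweep tests that prediction against the model's behaviour under replication.

```python
import random
import statistics

def opinion_variance(n_agents, n_steps, seed):
    """Variance of opinions after n_steps of random pairwise averaging
    (a toy model standing in for the simulation under study)."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        opinions[i] = opinions[j] = (opinions[i] + opinions[j]) / 2
    return statistics.pvariance(opinions)

# The theory predicts convergence: dispersion should fall as interaction
# increases. Sweep the parameter and average over replications so that
# the comparison is not an artefact of a single random trajectory.
sweep = {}
for n_steps in (0, 100, 1000):
    reps = [opinion_variance(50, n_steps, seed) for seed in range(20)]
    sweep[n_steps] = statistics.mean(reps)

print(sweep[1000] < sweep[100] < sweep[0])
```

A disconfirmation at this stage would point back to one of the failure sources discussed earlier: the theory, the model, the simulation, or the platform.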

The meta-model serves to formalise precepts associated with the broad class of phenomena under investigation. It provides a foundation set of assumptions for the subsequent simulation. It should be designed in such a way that it incorporates low-level assumptions which are 'safe' for the given research.

Complementary Methods: The Case for Situated Research as a Complement to Simulation

The implication of the extended semantic model is that simulations can best be advanced and tested by the concurrent use of simulation and more conventional research. Testing the mapping between the model and phenomena will necessarily require situated research and possibly inductive method, where a number of 'real world' situations are studied in order to develop a general model which captures the phenomena of interest. Situated research approaches are well established and represent increasingly widely accepted post-positivist method in the social sciences. They include the Naturalist approaches of Lincoln and Guba (1985), Cooperative Experiential Inquiry and The Dialectical Paradigm (Reason & Rowan 1981), Action Research (McTaggart 1991, Winter 1987, Whyte 1991) and Soft Systems Methodology (Checkland & Scholes 2000). These methodologies are of particular value for capturing dynamical patterns in real world systems. More traditional, and in particular quantitative, study methods tend to take a snapshot in time of the system being studied and are ill suited to identifying dynamical pattern. They may, however, be incorporated within these action frameworks. Situated research techniques provide a basis for systematic investigation of complex and embedded systems and for both describing and developing an explanation of the origins and dynamics of phenomena of interest.

The use of simulation methods in parallel with observational and experimental studies is demonstrated by the work of Carley et al (1998). In their study of organisational performance and the influence of organisational design and cognitive capability, they compared real agents with simulated agents directly on identical tasks. This is one example of how simulation can be adopted in combination with traditional alternatives to good effect.

Communicating the Results of Simulation Research

Having established a viable simulation which is judged to be consistent with theory and confirmed as relevant based on critical comparisons with real phenomena, a question remains as to how it may best be communicated. A simulation model is itself a basis for demonstrating and communicating the dynamical properties of a system. However, it is unreasonable to expect anyone who wishes to understand the dynamical properties of a system to conduct repeated runs of a simulation in order to discover them. Researchers using simulation need a more concise mode of communicating results, and of describing and substantiating any claimed links between the results of the model and the behaviour of any real system of which it is an analogue. Tsoukas points to the importance of qualitative description as a tool for capturing what is essential in the unfolding historical development of dynamical systems - real or simulated.
Qualitative descriptions seem to be best suited for capturing the circular texture of organisational phenomena. How else could one hope to do justice to the historicity of the phenomena to be explained, if not by narrating how the actions of interacting agents and the occurrence of chance events, unfolding in time, have been intertwined to generate the phenomena at hand (1998: 303).

For all systems which are history dependent (as non-linear systems may be) a means of describing historical trajectories is an essential tool. Griffin says of narrative:
Narratives are analytical constructs that unify a number of past or contemporaneous actions and happenings, which might otherwise have been viewed as discrete or disparate, into a coherent relational whole that gives meaning to and explains each of its elements (1993: 1097).

Narrative technique can capture unfolding patterns, but if chaos, and hence sensitivity to initial conditions, is present, can narrative capture the detail needed to aid understanding? Reisch argues:
if puny and unknowable details do in fact play an essential role in some particular history, narrative accounts of that history need not have access to that detail. The narrator can still describe and emplot events and the effects of that detail even though the detail itself and its causal power is not recognised. As a causal explanation the resulting narrative would appear, from some ideal vantage, to be incomplete or incorrect. But at least it would remain parallel and in step with events that actually occurred (1991: 18).

Used in the context of the modified semantic model, the capacity for narrative method to capture and communicate the observation of pattern in real world systems can then guide active experimentation using simulation models to identify the details that drive the pattern. The two methods complement and add rigour to one another.

The value of narrative technique, then, is not restricted to aiding with testing the model/phenomena mapping but lies also in being able to describe the behaviour of the models themselves. Tsoukas' views, for example, suggest that while simulation presents a formal and precise means of communicating a theory or model, interpreting and communicating the results often require a return to narrative. Furthermore, with agent based simulation, it will often be possible, for a given range of parameters and observation at a given level of precision, to identify more than one set of micro-specifications or rules which generate a given macro-phenomenon. Thus a particular micro-model may be regarded as sufficient but not necessary. This implies the need to support the choice of one model over another, and ultimately such an argument should rest on plausibility or correspondence to some empirical observation. Hence the circle between real phenomena and model can be tested, closed, and then communicated using the descriptive potential of narrative.
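The point about sufficiency without necessity can be made concrete with a toy sketch (both micro-rules are hypothetical illustrations, not drawn from this paper): two quite different micro-specifications each generate the same macro-pattern, near-consensus of opinions, so observing the macro-pattern alone cannot decide between them.

```python
import random
import statistics

def consensus_by_pairs(n, steps, seed):
    """Micro-rule A: randomly chosen pairs average their opinions."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        x[i] = x[j] = (x[i] + x[j]) / 2
    return x

def consensus_by_local_drift(n, steps, seed):
    """Micro-rule B: one agent drifts a small fraction of the way
    toward a randomly chosen other agent."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        x[i] += 0.1 * (x[j] - x[i])
    return x

# Two distinct micro-specifications, one macro-pattern: the spread of
# opinions collapses toward consensus under both rules.
for rule in (consensus_by_pairs, consensus_by_local_drift):
    final = rule(50, 20000, seed=1)
    print(rule.__name__, round(statistics.pstdev(final), 4))
```

Choosing between such equally sufficient micro-models is exactly where the appeal to plausibility or empirical correspondence described above is needed.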

Situated methods are largely derivative of post-modern schools of thought. As such they are sometimes criticised for lacking the rigour of more conventional scientific method. This is particularly true for narrative approaches. If such techniques are to be adopted and accepted, their use within the modified semantic method, where they are used side by side with simulation, should address many of these concerns. Techniques such as 'event structure analysis' (Griffin 1993) have also been suggested as a means for increasing the rigour of narrative method and remain relevant to its use in the proposed context.

* Conclusion

The need for methods suited to understanding and examining complex non-linear phenomena has been established. Existing research method is poorly suited to such contexts. This limitation applies in natural science but also within many social science disciplines. While 'non-naturalistic' methods have provided a means for coming to terms with the specific challenges of social phenomena, there remains concern that they fall short of meeting the criteria for 'good science'. This has led to a contentious meta-discourse within many social science disciplines and a growing plurality of method. There is growing argument that non-linear systems concepts have applicability to social phenomena as well as natural. There is a view also that method consistent with this approach challenges the traditional modern/post-modern discourse. Computer simulation is increasingly seen as one method with characteristics well suited to the study of complex phenomena, social and natural. The method still faces some opposition and limits to acceptance, however. In this paper a framework for integrating computational methods into a broader research framework, adopting more situated techniques to complement the advantages and compensate for the disadvantages of simulation, has been proposed. This methodology should address concerns arising from both modernist and post-modern critics of current methodologies in all social science. In the behavioural sciences, where there is a concern as to standards of rigour, the simulation component of the methodology adds a basis for model building and testing which can help decide between alternative truth claims as well as for substantiating theoretical validity. In disciplines where reductionist method is still entrenched, such as economics, the methodology requires a more robust testing of models against real-world phenomena and points to methods of benefit in testing the legitimacy and influence of the assumptions upon which the models are built. This will add relevance to the theory, ensuring an alignment with the real world phenomena of interest.

* References

ARTHUR, W.B., DURLAUF S. N & LANE D.A. (1997), The Economy as an Evolving Complex System II, Addison-Wesley, Reading MA.

AXELROD, R. (1997), 'Advancing the Art of Simulation in the Social Sciences', Complexity,Vol. 3, No 2, John Wiley N.Y. p.p. 16- 22.

BAERT P (1998), Social Theory in the Twentieth Century, Polity.

BOOKCHIN, M. (1995), The Ecology of Freedom: The Emergence and Dissolution of Hierarchy,Black Rose, Montreal.

BRASSEL K. H. MOHTING, M. SCHUMACHER E. & TROITZSCH K. G. (1997), 'Can agents Cover All the World'. In Conte R. Hegselmann R. & Terna P. eds. Simulating Social Phenomena, Springer, Berlin.

BURRELL G. & Morgan G. (1994), Sociological Paradigms and Organisational Analysis, Virago, London.

BYRNE, D. (1997) 'Simulation - A Way Forward?' Sociological Research Online, Vol. 2, no. 2, http://www.socresonline.org.uk/socresonline/2/2/4.html

CARLEY, K. M , PRIETULA M.J. & ZHIANG L., (1998), 'Design vs Cognition: The Interaction of agent cognition and organizational design on organizational performance', Journal of Artificial Societies and Social Simulation, Vol. 1 No. 3, https://www.jasss.org/1/3/4.html

CASTI, J.L. (1994), Complexification: Explaining a Paradoxical World Through the Science of Surprise, Abacus, U.K.

CHECKLAND, P. SCHOLES J., 2000, Soft Systems Methodology in Action, Wiley, Chichester.

CONTE, R., HEGSELMANN R. & TERNA P. eds. (1997) Simulating Social Phenomena, Springer, Berlin.

CONTE R (1998), email subject titled, Carts and horses in computer simulation for the social sciences, posted to SIMSOC list server, simsoc@mailbase.ac.uk, 26 January 13:55:02.

EPSTEIN J.M & AXTELL R. (1996), Growing Artificial Societies, MIT Press, Cam. Ma.

EVE, R.A., HORSFALL S & LEE M.E., (1997), Chaos, Complexity and Sociology, Sage, London.

FERBER J. (1999), Multi-agent Systems: An Introduction to Distributed Artificial Intelligence,Addison -Wesley, New York.

FISHWICK P. (1995) 'What is Simulation', http://www.cis.ufl.edu/~fishwick/introsim/node1.html.

GALBRAITH J.K. (1994), The world economy since the wars,Sinclair-Stevenson, London.

GILBERT, N. (1996), 'Computer Simulation of Social Processes', Social Research Update, Issue Six, http://www.soc.surrey.ac.uk/sru/SRU6.html.

Gilbert, N. & CONTE R. eds. (1995), Artificial Societies, UCL Press, London.

GILBERT, N. & TROITZSCH K. G. (1999), Simulation for the Social Scientist,Open University Press, Buckingham.

GOLDSPINK, C. 2000a, 'Contrasting linear and non- linear perspectives in contemporary social research', Emergence, Vol 2 No 2. pp. 72-101.

GOLDSPINK, C. 2000b, 'Modelling social systems as complex: Towards A social simulation meta-model', Journal of Artificial Societies and Social Simulation, Vol 3 No 2, https://www.jasss.org/3/2/1.html

GRIFFIN L.J. (1993), 'Narrative, Event-Structure Analysis, and Causal Interpretation in Historical Sociology', American Journal of Sociology, Vol 98, No. 5 p.p. 1094-1133.

HANNEMAN R. A. (1995), 'Simulation Modeling and Theoretical Analysis in Sociology', Sociological Perspectives, Vol. 38, No. 4, p.p. 457-462.

HANNEMAN R. A. and PATRICK S. (1997), 'On the Uses of Computer-assisted Simulation Modeling in the Social Sciences', Sociological Research Online, Vol. 2, No. 2, http://www.socresonline.org.uk/socresonline/2/2/5.html

HODGSON G. M. (1996), Economics and Institutions, Polity Press, Oxford.

Holland, J.H. (1998), Emergence: from chaos to order, Addison Wesley, MA.

ILGEN, D.R & HULIN C.L. (eds) (2000), Computational Modeling of Behavior Organizations: The Third Scientific Discipline, American Psychological Association, Washington DC.

KENNEDY J. & EBERHART R.C. (2000), Swarm intelligence, Morgan Kaufmann.

KOLLMAN K, MILLER J.H. & PAGE S. (1997) 'Computational Political Economy', in Arthur W.B., Durlauf S. N. & Lane D.A. (eds.), The Economy as an Evolving Complex System II, Addison-Wesley, Reading Ma.

LEIK, R. K. & MEEKER B. F. (1995) 'Computer Simulation for Exploring Theories: Models of Interpersonal Cooperation and Competition', Sociological Perspectives, Vol. 38, No. 4, p.p. 463-482.

LEWIN R., PARKER T. & REGINE B. (1998), 'Complexity Theory and the Organization: Beyond the Metaphor', Complexity, Vol 3 No. 4. John Wiley, p.p. 36-40.

LINCOLN, Y.S. & GUBA E.G. 1985, Naturalistic Inquiry, Sage N.Y.

MCTAGGART R. (1991), 'Principles for Participatory Action Research', Adult Education Quarterly, Vol 41, No. 3, p.p. 168-187.

MARION, R. (1999), The Edge of Organization: Chaos and Complexity Theories of Formal Social Systems, Sage, CA.

MCKELVEY, B., (1997), 'Quasi-Natural Organisation Science', Organization Science,

MCKELVEY, B. (1999), 'Complexity Theory in Organization Science: Seizing the Promise or Becoming a Fad?', Emergence, Vol 1 No 1., p.p. 5-32.

ORMEROD, P. (1995), The Death of Economics, Faber and Faber, London.

ORMEROD, P. (1998), Butterfly Economics, Faber & Faber, London.

PARUNAK, V. (1997), 'Towards the Specification and Design of Industrial Synthetic Ecosystems', Paper presented at the Fourth International Workshop on Agent Theories, Architectures and Languages (ATAL'97), Industrial Technology Institute, http://citeseer.nj.nec.com/parunak97toward.html

PRIETULA, M.J. CARLEY K.M. & GASSER L. eds. (1998), Simulating Organizations, M.I.T. Press, Ca.

REASON P. & ROWAN J. eds. (1981) Human Inquiry: A Sourcebook of New Paradigm Research, John Wiley.

REISCH G.A. (1991), 'Chaos, History and Narrative', History and Theory, Vol 30, p.p. 1-20.

ROSENAU P. M. (1992), Post-modernism and the social sciences, Princeton University Press, N.J.

SHRECKENGOST (1985), 'Dynamic Simulation Models: How Valid Are They?', in Rouse B.A, Kozel N. & Richards L.G (eds) Self Reporting Methods of Estimating Drug Use: Meeting Current Validity Challenges, NIDA Research Monograph 57, Washington.

SIMON, H.A. (1996), The Sciences of the Artificial, 3rd edition, The MIT Press, Cam. Ma.

STEWART I. (1990), Does God Play Dice - The New Mathematics of Chaos, Penguin.

TROITZSCH K.G. (1997), 'Social Science Simulation: Origins, Prospects and Purposes', in Conte R. Hegselmann R. & Terna P. eds. (1997) Simulating Social Phenomena, Springer, Berlin.

WHYTE W.F. ed. (1991), Participatory Action Research, Sage.

WINTER R. (1987) Action Research and the Nature of Social Enquiry, Gower, U.K.


