© Copyright JASSS


Quantitative Sociodynamics: Stochastic Methods and Models of Social Interaction Processes

Helbing, Dirk
Springer-Verlag: Berlin, 2010
ISBN 9783642115455 (pb)


Reviewed by Bruce Edmonds and Mario Paolucci
Centre for Policy Modelling, Manchester Metropolitan University and LABSS, ISTC-CNR Rome

This book describes a physicist's approach to modelling social systems, namely by focussing on how to use analytic mathematics for this purpose. The central premise of the book is that there are now mathematical techniques that are adequate to the modelling of social phenomena - the book presents these with a few examples. For this reason, to assess the importance and usefulness of this book to social scientists it is necessary first to discuss the potential role of such mathematics. For those who are only interested in how good this book is at presenting the mathematics, we suggest skipping down to the second section of this review.

The book as a proponent of an analytic approach to modelling social phenomena

Mathematics is a formal modelling technique - in other words, the model can be precisely specified and communicated without error. Such formal models are very important to the social process of science since they enable the communal development of models: they can be passed around, checked, investigated and have variants made of them. Before the availability of accessible computational resources, mathematics was the only formal game in town. Thus it is not surprising that good "science" became associated with mathematics, which became its "lingua franca". However, now that computers are cheap and easy to use, mathematics is no longer the only formal modelling choice.

The other advantage of using mathematics is that it holds out the prospect of demonstrating general form solutions - expressions that describe ALL the possible outcomes that could result from a given system (called "analytic mathematics" here). This is in contrast with a simulation approach where, however many times you run the model, there may be examples where the behaviour deviates from how you have characterised the outcomes. In other words, either the simulation occasionally behaves "oddly" (or very differently after a long time), or there are combinations of parameter settings where the outcomes are different from what is assumed.

However, analytic mathematics is hard[1]. Thus sets of equations that adequately describe the dynamics of complex systems are rarely directly analytically solvable in this manner. Rather, one has to use one of a number of approximations and restrictions to the original set of equations in order to obtain analytic results. For example, one may restrict the sort of question one is answering (e.g., only look at the average value over the population of some property of individuals) or make simplifying assumptions (e.g., assume that third-order properties of the population are essentially random). Thus the original set of equations is not solved but rather a transformed set is solved - what one gets out is not a general characterisation of the original set of equations but a general characterisation of the simplified set. The rigour of such a characterisation thus depends upon the safety of the assumptions behind the simplifications. The art of knowing when and how one can safely simplify is what a good theoretical physicist has to be good at, not just how to solve the equations. Indeed, it is common for the conclusions of any such inferences to be checked by using different assumptions or techniques and comparing the results that come from them (Cartwright 1983).
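To make the flavour of such simplification concrete, here is a schematic moment-closure example (our own notation and illustration, not taken from the book; a, b and x are purely illustrative symbols): the exact equation for the population average is not closed, because it depends on a higher moment, which is then assumed away.

\begin{align}
  \frac{d\langle x\rangle}{dt} &= a\,\langle x\rangle + b\,\langle x^{2}\rangle
      && \text{(exact, but not closed)} \\
  \langle x^{2}\rangle &\approx \langle x\rangle^{2}
      && \text{(simplifying assumption)} \\
  \frac{d\langle x\rangle}{dt} &\approx a\,\langle x\rangle + b\,\langle x\rangle^{2}
      && \text{(closed and analytically solvable)}
\end{align}

Whether the conclusions drawn from the closed equation carry over to the original system depends entirely on how safe the middle step is.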

This contrasts with what happens in a simulation approach where, by and large, there is much less simplification in the inference step (running it) but one only gets specific examples. The simplification here happens either beforehand, in the design of the simulation, or else in the characterisation of the outcomes from the set of simulation runs (e.g., graphs of averages). Neither of these is necessary to obtain the inference, although the more complex the simulation, the more difficult it is to check it for bugs and to understand its processes and outcomes. What is novel, though, is that we now have a choice of which technique is best suited to which problems or phenomena. This book assumes that mathematics is the way forward; its argument is essentially pragmatic, demonstrating what can be done with some newly available mathematical approaches.

To assess the usefulness of a modelling technique one has to look at the strength of three stages in the use of a model (Hesse 1963): (encoding) the map from the known or hypothesised facts and processes into the model set-up, (inference) the deduction of results from the set-up to the outcomes, and finally (decoding) the mapping of the results back to the phenomena of concern. Roughly, the usefulness of a model is the reliability of the whole modelling chain: encoding + inference + decoding (Edmonds 2001).

The main criticism of the analytical mathematical approach is that it strengthens the inference step but at the cost of crucially weakening the encoding and decoding steps. If the total effect of this is to weaken the whole modelling chain then this is a valid criticism. Thus, for example, in economics it has been common to use frankly unrealistic assumptions in order to obtain analytical mathematical results. The mistake there was to assume that using mathematics is more important than maintaining relevance in the encoding step, which is why this approach failed to produce useful science (Moss and Edmonds 2005). The question is whether the mathematics presented in this book is an improvement on such attempts.

One big advantage of working with physicists is their analytic promiscuity - they care relatively little about tradition in deciding what kind of analytic model they are going to use. This is in refreshing contrast to the situation in economics, where authors will often go out of their way to document the support in terms of previous similar models and where certain methodological assumptions are sacrosanct (e.g., utility comparison in decision making, or the establishment of equilibria). Thus in this book we have diffusion models, social force models and even gravity models! True to the physics culture, it takes a relaxed approach to establishing the reliability of the encoding step - thus in this book the encoding step will appal many social scientists, for example reducing the influence between two interacting individuals to a force. However, to a physicist this is alright because physicists are ruthless in checking the decoding step, in other words the match of the results against data.

There are two main uses for models that are relevant here: prediction and explanation. The way one uses a model depends upon one's goal. It is important to distinguish these two uses here (Edmonds and Moss 2001).

Prediction only makes sense if one is predicting something unknown: one starts with the encoding, does the inference, and the decoding produces the prediction, which is later checked against what is observed. There is no point in predicting known data (such as in a hold-out set). If one gets a model that actually predicts unknown social outcomes then no one is going to argue about the usefulness of the model. However, it is very difficult to really predict social outcomes. A retrospective study of 39 published economics models showed that in 37 cases, although they seemed to fit their hold-out data well, they failed to predict the data that had become available since their publication (Mayer 1975). This book does not attempt to establish its models on the strength of their predictive power (but a few of the traffic models have had some limited predictive success, described in the wider literature).

Explanation does not require the prediction of unknown data, so one can legitimately use known outcome data for explanation. Here one uses the modelling chain in a different way: an explanation for the outcomes in terms of the set-up is established by the inference step. In a sense, we are using the modelling chain backwards, from decoding via inference to the encoding step. The fit of the model outcomes to available data produces an explanation in terms of the model, which is specified using the encoding step. Thus the explanation obtained is in terms of the encoding. So, to get back to this book, the fit of a model to some data is in terms of the relaxed encodings that have been chosen, e.g., "social forces" or "gravity". The value of this approach depends on whether explanations of social phenomena in such terms usefully advance our understanding or not.

As far as we can tell, the models in this book fail either to predict unknown data or to establish useful explanations. It seems that the book makes some fundamental, albeit understandable, mistakes. In physics the micro-foundations are often well understood and often do involve things such as forces, so producing explanations in terms of these is useful. In physics there is a brutal culture of testing models against unknown data to weed out those that are not predictive. In social science neither of these holds: we don't understand much about how people think or interact, and the opportunities for falsifying predictive models are limited. Thus we have to question the whole basis on which this book proceeds. It may happen that the kind of models displayed can be developed into predictive models, but on the whole the evidence does not support this. It may happen that social interactions can (in certain circumstances) be simplified to such ideas as "social force" but, so far, this is unproved. Of course, one cannot totally rule out the optimistic position that these may come to pass, and it seems that this is, indeed, the implicit position of Dirk Helbing here.

To summarise, the book presents a series of techniques which enable the solution of analytic models in various ways (this is the "meat" of the book). The models are formulated in terms of diffusion between individuals, social forces, gravity and the like - in other words a rather weak encoding: one can (with practice) think of social interactions in these terms, but in ways that seriously depart from what appears to be the case. Analytic results are obtained using a series of assumptions whose validity is difficult to check and whose application requires considerable skill. The model outcomes are then weakly compared to data, usually in terms of a single measurement or aspect. Thus, although the author does succeed in avoiding some of the patently stupid assumptions of economics, demonstrating a more flexible and less rigid approach to analytic modelling, it does not take us far enough. It fails to demonstrate that such models will be useful for social science (at least in the manner they are presented), but rather simply hopes they will be.

This is not to say that analytic mathematics has no use for the social simulation community. For example, more realistic simulations and mathematical approaches can often be used fruitfully and legitimately in conjunction with each other. Mathematical analyses can be very useful for understanding what is happening in complex simulation models (e.g., Deffuant and Weisbuch 2007; Izquierdo et al. 2009). This is a two-way street: simulations can be very useful in testing the assumptions of an analytic model, for example when one has small finite populations of individuals. The problem is that the step from complex social phenomena to solvable analytic models is too drastic[2].
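As a toy illustration of the first direction (our own sketch in the spirit of the Markov chain analysis of Izquierdo et al. 2009, not code taken from the cited papers; the three states and their probabilities are invented), the long-run behaviour of a small stochastic model can be computed exactly from its transition matrix rather than estimated by averaging many simulation runs:

import numpy as np

# Transition matrix of a tiny illustrative model with three states
# (states and probabilities are made up for this example).
P = np.array([[0.90, 0.08, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

# The stationary distribution is the left eigenvector of P for eigenvalue 1;
# it gives the long-run proportion of time the model spends in each state.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
stationary = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues - 1))])
stationary = stationary / stationary.sum()

print("Long-run proportions:", stationary)

An exact result of this kind can then be used as a check on what the corresponding agent-based runs actually produce.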

The book as an introduction to some potentially useful mathematics

If one considers this book not as a demonstration that analytic models are an important way forward for the direct modelling of social phenomena, but simply as a primer for learning some potentially useful maths, then it has significant merits. The last part of this review is written from this perspective.

This ambitious volume aims to provide a comprehensive account of mathematical methods for the quantitative study of social processes, with a focus on stochastic methods. We do not know of comparable accounts; thus we welcome this as a first attempt to build a bridge between the wide and deep corpus of knowledge in stochastic methods and the issues under study in social and economic research.

The techniques presented are impressive, thoroughly documented and precisely explained, even if, in some cases, in a rather too compact way that might scare the non-expert reader. The new preface seems to be aware of this issue, as it recommends that readers "whose excitement for mathematical details is limited" (p. vi) skip parts of it.

From the point of view of the social simulation practitioner - the JASSS audience - the wealth of models and techniques presented is certainly a good source of inspiration. We can easily envision how a researcher, well versed in the art of programming a simulation, would be attracted by the techniques in the book and try applying tools like the master equation, which describes the dynamics of uniform subpopulations in a set of individuals (e.g., Shnerb et al. 2000).
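For readers unfamiliar with it, the master equation in its standard gain-loss form (textbook notation, ours rather than a quotation from the book) governs the probability P_i(t) of finding the system in state i, with w_{ij} the transition rate from state j to state i:

\begin{equation}
  \frac{dP_i(t)}{dt} = \sum_{j \neq i} \left[ w_{ij}\, P_j(t) - w_{ji}\, P_i(t) \right]
\end{equation}

The first term collects probability flowing into state i from other states, the second the probability flowing out of it.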

However, there just aren't enough examples of applications in the book. The focus seems to be on presenting a comprehensive survey of modelling decisions in the abstract. Yet within the space of a whole book at least a few examples of applications should be present - where possible - explaining when and why the techniques described have advantages over an agent-based simulation of the target issue. The choice of models in the book follows from the mathematical approach; the techniques presented seem to have in common the feature of being applicable to low-dimensional individuals. Simple, particle-like individuals are amenable to easy treatment with approaches from statistical physics.

The techniques used in this volume are difficult to employ; a social researcher with little or no background in mathematical analysis will be disheartened by the formulas in the book. In this sense the lack of examples and exercises (especially when reading the book in sequence, where one immediately hits the very difficult second section) is an area where some improvement could be made; after all, the master equation is not such a complex mathematical object, and a couple of figures and examples could really make a difference here. We suspect that the author's skill in mathematics has made him somewhat blind to the difficulties faced by readers with less mathematics.
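To show how small such an example could be (the sketch below is ours, not the book's, and the rates are invented), a two-state master equation can be integrated numerically in a few lines and checked against its analytic equilibrium:

import numpy as np

# Two-state master equation (our own minimal illustration, not from the book):
#   dP1/dt = w12 * P2 - w21 * P1
#   dP2/dt = w21 * P1 - w12 * P2
# where w12 is the (invented) transition rate from state 2 to state 1
# and w21 the rate from state 1 to state 2.
w12, w21 = 0.3, 0.1
P = np.array([1.0, 0.0])   # all probability initially in state 1
dt, steps = 0.01, 5000     # simple explicit Euler integration

for _ in range(steps):
    flow = w12 * P[1] - w21 * P[0]        # net probability flow into state 1
    P = P + dt * np.array([flow, -flow])  # total probability is conserved

print("Numerical equilibrium:", P)
print("Analytic equilibrium :", np.array([w12, w21]) / (w12 + w21))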

To summarise, as an introduction to such mathematics the book is admirably comprehensive and goes through the various derivations in a careful, step-by-step manner. However, the reader will need considerable mathematical confidence and endurance to get through much of it. We could imagine that it would be an excellent reference for those who already have some mathematical expertise in the area. For those who straddle the simulation and mathematical communities it could be very useful.


* Notes

1. Indeed, we suspect another reason maths is associated with good science is precisely this hardness - it serves to protect a scientific elite and is thus sought after regardless of whether it is helpful, e.g., as in economics.

2. What a UK civil servant might call "brave" or, worse, "heroic".

3. We apologise for the number of self-citations in this review, but this is because the usefulness of mathematics is an issue that has interested one of the authors (BE) since he was 10; it led him to study Mathematics for his first degree and the Philosophy of Modelling in his PhD, and generally to obsess about it ever since.


* References[3]

CARTWRIGHT, N (1983) How the Laws of Physics Lie. New York: Oxford University Press

DEFFUANT, G and Weisbuch, G (2007) 'Probability Distribution Dynamics Explaining Agent Model Convergence to Extremism.' In Edmonds, B, Troitzsch, KG and Hernández, C (Eds.) Social Simulation: Technologies, Advances and New Discoveries. Hershey, PA: Information Science Reference, pp. 43-60

EDMONDS, B (2001) 'The Use of Models - making MABS actually work.' In Moss, S and Davidsson, P (Eds.), Multi Agent Based Simulation. Lecture Notes in Artificial Intelligence 1979. Berlin-Heidelberg: Springer Verlag, pp. 15-32

EDMONDS, B and Moss, S (2001) 'The Importance of Representing Cognitive Processes in Multi-Agent Models.' In Dorffner, G, Bischof, H and Hornik, K (Eds.), ICANN 2001. Lecture Notes in Computer Science 2130. Berlin-Heidelberg: Springer Verlag, pp. 759-766

HESSE, M (1963) Models and Analogies in Science. London: Sheed and Ward

IZQUIERDO, LR, Izquierdo, SS, Galán, JM and Santos, JI (2009) Techniques to Understand Computer Simulations: Markov Chain Analysis. Journal of Artificial Societies and Social Simulation 12 (1) 6, https://www.jasss.org/12/1/6.html

MAYER, T (1975) Selecting Economic Hypotheses by Goodness of Fit. The Economic Journal, 85, pp. 877-883

MOSS, S and Edmonds, B (2005) Towards Good Social Science. Journal of Artificial Societies and Social Simulation 8 (4) 13, https://www.jasss.org/8/4/13.html

SHNERB, NM, Louzoun, Y, Bettelheim, E and Solomon, S (2000) The importance of being discrete: Life always wins on the surface. PNAS, 97 (19), pp. 10322-10324, http://dx.doi.org/10.1073/pnas.180263697
