
Escape from Model Land. How Mathematical Models Can Lead Us Astray and What We Can Do About It

Erica Thompson
Basic Books: New York, NY, 2022
ISBN 9781529364873 (hb)

Reviewed by Bruce Edmonds
Centre for Policy Modelling, Manchester Metropolitan University, United Kingdom

Occasionally a book comes along that illuminates an issue in a new way with convincing conceptual clarity – this is not such a book. Rather, it is a wide-ranging critique of the practice and use of models as and when they influence policy. It is thus to be recommended to all involved in the modelling-policy interface as an introduction to some of the pitfalls and biases that can occur.

It starts with an accessible introduction to the world of modelling and some of its difficulties/pitfalls. It then looks at three areas in more detail: financial trading, climate change and disease spread. Its main conclusion is that there are two routes to “escape” from the internal world of modelling: the quantitative route (for empirically well-validated models), and the qualitative one (where the understanding gained from modelling should be treated on a par with other expertise). It is packed with valid critique concerning how models are made, checked and used in the context of policy, ranging over many different aspects and issues. It ends with a particularly useful list of questions for policy actors and stakeholders to ask of modellers that extends and deepens previous lists, such as that in Calder et al. (2018).

However, I found this book a frustrating read. Although it comes to many of the same conclusions that Lia Aodha and I did (Aodha & Edmonds 2017), I found it tended to overgeneralise and conflate some of the issues. For clarity, I have organised my critique of the book into three areas: its failure to separate general critiques applicable to all kinds of representation from those specific to modelling; its insufficient distinction between different kinds of modelling; and its overly rosy view of human discursive consideration. I now look at these in turn.

The most fundamental lack of clarity in this book is its failure to separate criticisms of any kind of representation or abstraction from those specific to formal models. If one went through this book and replaced every instance of “model” with terms like “abstraction”, “representation”, “description” or “idea” then many of its statements would still hold (even if you restrict yourself to cases where a formal, computational or mathematical model is implied). In fact, some of the critiques are more true of kinds of representation that are less formal than the mathematical models that this book focusses upon. For example, doing this swap in the last two sentences of page 7 results in the following (swapped instances of “model” in bracketed bold): “Having constructed a beautiful, internally consistent [story], it can be emotionally difficult to acknowledge that the initial assumptions on which the whole thing is based are not literally true. This is why so many reports and academic papers concerning such [ideas] either make and forget their assumptions, or test them only in a perfunctory way”. That a powerful idea or compelling story can be as convincing as a model and be based upon an equally poor evidential basis is not emphasised. Any abstraction can be dangerous if you take it too seriously, including models. Of course, models can gain a pseudo-scientific credibility due to their formal nature, which makes them hard for non-experts to criticise, but this is equally true of academic jargon. For example, the author rightly critiques the use of Integrated Assessment models to downplay the impact of climate change – implicitly attributing the errors and influence to the modelling rather than to the fact that it is done by free-market economists who have an interest in downplaying the importance of state intervention. It is hard to imagine that, had those economists not had models, their conclusions or influence would have been any different.

The author is not a modeller herself. This is a strength in that it helps give her an outsider’s view of how models are used in a policy context – she has not herself been beguiled by models and thus is in a good position to judge their use and impact. However, it also seems to have resulted in a lack of distinction between the different kinds and uses of models, usually treating them all as fundamentally the same, merely differing in degree on some aspect. The kind of model at which she aims most of her fire is the mathematical model developed within an established modelling orthodoxy. She looks at two different scenarios for these: (a) predictive models that are validated on the basis of many previous trials (as in weather forecasting) and (b) models based more on the plausibility of their assumptions (to the modeller) that are used in a more metaphorical manner (as with the Integrated Assessment models). Erica Thompson is completely correct in her critique of the latter kind of modelling: however impressive its technology might be, it is no more than a mathematically-expressed metaphor which should be taken as having no more weight than other expert opinions. It is not merely that it is easier to convince others that a model-based opinion is more scientific, but that the modellers themselves can start to conflate their models with reality because they think using the models – a problem that can be indicated by language that fails to distinguish what is true in the model from what is claimed to be true of what is modelled (see also Edmonds 2020).

Although the author has considered epidemiological models, which can be somewhat disaggregated in nature, she gives no sign of having come across agent-based models (ABM). There were moments in reading this book where I wanted to cry out “That is why you would need an ABM!”. For example, in the section on epidemiological modelling (Chapter 9) the author laments that the modelling reduced its representation to a “few dimensions” and restricted itself to physical processes only (ignoring the social aspects). Similarly, when talking about Integrated Assessment models (which are usually a variant of economic equilibrium models) to assess the cost of climate change, she rightly criticises: (a) the simplistic way in which these deal with the various dimensions of the impact of climate change on society and (b) the undue weight these models have had on the debate about prevention and mitigation. ABM could have allowed much less simplistic, but still formal, explorations of the possibilities in these two cases. This use is neither predictive nor metaphoric, allowing the partial constraint of modelling by evidence, and allowing future possible “trajectories” of events to be identified – trajectories that might not be otherwise envisaged, which can be added to those envisioned by stakeholders and experts in an inclusive manner (e.g. Dignum 2021).
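To make the contrast concrete: where aggregate epidemiological models compress behaviour into a few parameters, an ABM can let each agent carry its own state and social response. The following is a purely illustrative toy sketch (not from the book, and with all parameters invented) in which agents halve their contacts once they perceive prevalence passing a threshold – exactly the kind of social feedback the review notes is missing from purely physical-process models:

```python
import random

def run_abm(n=200, steps=60, p_infect=0.05, contacts=8,
            recovery=0.1, caution_threshold=0.1, seed=1):
    """Toy agent-based S-I-R sketch. Agents halve their number of
    contacts when observed prevalence exceeds caution_threshold - a
    crude stand-in for the social responses that aggregate models
    typically leave out. All numbers are illustrative, not calibrated."""
    rng = random.Random(seed)
    state = ["S"] * n
    for i in rng.sample(range(n), 3):      # seed a few initial infections
        state[i] = "I"
    history = []
    for _ in range(steps):
        prevalence = state.count("I") / n
        history.append(prevalence)
        # the "social" rule: agents become cautious when prevalence is high
        k = contacts // 2 if prevalence > caution_threshold else contacts
        new = list(state)
        for i in range(n):
            if state[i] == "I":
                if rng.random() < recovery:
                    new[i] = "R"
                for j in rng.sample(range(n), k):   # random mixing
                    if state[j] == "S" and rng.random() < p_infect:
                        new[j] = "I"
        state = new
    return history

if __name__ == "__main__":
    print(run_abm()[-1])
```

The point is not this particular rule but that such behavioural feedbacks are trivial to express agent-by-agent, while they must be squeezed into aggregate parameters in an equation-based model.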

Lastly, although the book is full of well-aimed criticism concerning the use and abuse of modelling, it is not as reflexive concerning the limitations of human discursive reasoning and dialogue. Of course, this is not the topic of this book, but it is the “null model” against which model-based decision-making is compared. The author quotes Lempert et al. (2003): “Humans also possess various sources of knowledge – tacit, qualitative, experiential and pragmatic – that are not easily represented in traditional quantitative formulations. Working without computers, humans can often successfully reason their way through problems of deep uncertainty, provided their intuition about the system in question works reasonably well.” (p. 213 in the book). Leaving aside that many of these kinds of knowledge are more representable in an ABM, this is broadly correct – humans are relatively good at reasoning about such problems, but note the caveat! This kind of reasoning depends upon having reasonably good intuitions – something that is often far from the case. Again, the author makes some good points here, namely that bad decision-making can sometimes be due to: (a) narrowing the framework of consideration to what can be measured/modelled and (b) a lack of diversity in terms of input (both of which are discussed in Aodha & Edmonds (2017)). Avoiding these can help, but this only goes so far, for keeping to discursive processes and ensuring a diversity of actors involved does not prevent group-think or many of the other biases humans are prone to (e.g. only being able to mentally track a few entities at a time). In particular, the suggestion that non-experts should make climatological models is misplaced (although I am of the view that almost anything is better than economic equilibrium modelling).
It is true that these disciplines can result in a narrow orthodoxy in how to model their phenomena and, in such cases, a greater diversity of approaches would be welcome – but to assume non-experts would not make basic mistakes of fact ignores that fields like climatology are sciences. We now know a lot about how the climate works and all models should be constrained by this knowledge (though not necessarily by the modelling traditions of how to model within these constraints). At the very least, a modeller should know when they are ignoring a factor or making a simplifying assumption – this cannot be left to guesswork.

Everybody favours their own methods and tools, often with a certain blindness to their downsides as a result of familiarity and expertise. This is often true of us modellers and it is good to point this out, but it is also true of the author of this book. Two of her tools are: conceptual frameworks and diverse discourse. To support the value of conceptual frameworks she gives the example of astrology, claiming that the framework of astrology can be helpful for consideration even though it lacks any empirical basis. However, if a framework acts to direct or structure thought, then this might be in destructive as well as constructive ways. The framework of astrology might be useful as part of a performance for hoodwinking clients but (a) almost any impressive and randomising performance would do as well, as long as the client is taken in and (b) it might also lead away from an assessment more grounded in knowledge. Conceptual frameworks are not neutral in their usefulness (e.g. the idea of “race”). As to the conditions under which diversity helps discourse, we social simulators know this to be a tricky question, with our models indicating that polarisation or convergence on bad decisions can be as much an outcome of such processes as the positive cases. Social processes are complex – not a simple “good”. One cannot assume that achieving a diversity of voices will result in a good decision – it avoids some obvious traps but leaves others open.
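The social-simulation result alluded to here – that diverse starting opinions need not converge under discussion – can be illustrated with a minimal bounded-confidence sketch in the style of Hegselmann–Krause. This is an illustrative toy (parameters invented, not from the book): each agent repeatedly averages only those opinions within a tolerance of its own, and a diverse population can end up as separated clusters rather than a consensus:

```python
import random

def bounded_confidence(opinions, eps=0.2, steps=50):
    """Toy Hegselmann-Krause-style update: each agent moves to the mean
    of all opinions within distance eps of its own. With a small eps,
    an initially diverse population typically splits into clusters
    (polarisation) instead of reaching consensus."""
    x = list(opinions)
    for _ in range(steps):
        x = [
            sum(o for o in x if abs(o - xi) <= eps)
            / sum(1 for o in x if abs(o - xi) <= eps)   # agent counts itself
            for xi in x
        ]
    return x

if __name__ == "__main__":
    rng = random.Random(0)
    start = [rng.random() for _ in range(40)]           # diverse voices
    end = bounded_confidence(start)
    print(sorted({round(o, 2) for o in end}))           # surviving clusters
```

The design point: the outcome (consensus versus clusters) depends sensitively on the tolerance parameter, which is precisely why “add more diverse voices” cannot be assumed to produce a good collective decision.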

One final thought – it is simply assumed in this book that the final word must go to discursive processes, and it is true that many decisions that may be informed by modelling need a political basis because they are value-based (e.g. whether keeping cafes open is more important than saving thousands of lives) – but surely we can aim for approaches that use human discourse/thought alongside modelling in more complementary ways. The book focusses on escaping from Model Land, but sometimes we also need to escape from Idea Land – at least with a model and suitable empirical data you can tell when and how the model is wrong, which is often unclear with discursive ideas.


References

AODHA, L. & Edmonds, B. (2017) ‘Some pitfalls to beware when applying models to issues of policy relevance.’ In Edmonds, B. & Meyer, R. (eds.) Simulating Social Complexity - A Handbook, 2nd edition. Berlin Heidelberg: Springer, pp. 801-822. [doi: 10.1007/978-3-319-66948-9_29]

CALDER, M., Craig, C., Culley, D., de Cani, R., Donnelly, C. A., et al. (2018) Computational modelling for decision-making: where, why, what, who and how. Royal Society Open Science, 5(6). [doi: 10.1098/rsos.172096]

DIGNUM, F. (Ed.) (2021) Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis. Berlin Heidelberg: Springer. [doi: 10.1007/978-3-030-76397-8]

EDMONDS, B. (2020) Basic Modelling Hygiene - keep descriptions about models and what they model clearly distinct. Review of Artificial Societies and Social Simulation, 22nd May 2020: https://rofasss.org/2020/05/22/modelling-hygiene/.

LEMPERT, R. J., Popper, S. W. & Bankes, S. C. (2003) Shaping the Next One Hundred Years: New Methods for Quantitative Long-Term Policy Analysis. Santa Monica, CA: RAND, report number MR-1626: https://www.rand.org/pubs/monograph_reports/MR1626.html.